Discover more from Basic Web Guy
Net neutrality returns to the FCC
“Today, there is no expert agency ensuring that the internet is fast, open, and fair.” Plus: How AI is bringing back the deceased.
The fight for net neutrality has returned to the FCC.
On October 19, the Federal Communications Commission voted to move forward with a proposed rulemaking, the first step toward reinstating net neutrality.
“Today, there is no expert agency ensuring that the internet is fast, open, and fair. And for everyone, everywhere to enjoy the full benefits of the internet age, internet access needs to be more than just accessible and affordable,” Chairwoman Jessica Rosenworcel said ahead of the vote.
“The internet needs to be open,” she added.
Net neutrality supporters are celebrating the FCC’s move, which will classify broadband as a Title II communication service. Devin Coldewey of TechCrunch calls this change “a distinction that has been debated for decades but ultimately makes perfect sense.”
Coldewey adds that:
…internet providers are meant to act as pipes for data the same way phone companies do for calls. Of course this distinction has become more complex, but the legal and expert consensus is that broadband should be regulated like a telecom rather than a tech company — like AT&T rather than Microsoft.
Title II would subject broadband providers to the same strict rules that govern public utilities, preventing ISPs from throttling or blocking internet traffic.
History shows that ISPs have worked against a fair and open internet. In 2018, Verizon throttled a fire department’s data connection during a California wildfire.
And in 2007, Comcast was caught interfering with BitTorrent traffic. At the time, NBC News called it “a move that runs counter to the tradition of treating all types of Net traffic equally,” adding that “Comcast's interference…appears to be an aggressive way of managing its network to keep file-sharing traffic from swallowing too much bandwidth and affecting the Internet speeds of other subscribers.”
Net neutrality has had a roller coaster ride over the years.
In 2015, the FCC, under then-Chairman Tom Wheeler, voted to establish net neutrality rules that reclassified broadband as a utility under Title II of the Communications Act. This prevented broadband providers from throttling sites or services, or from offering paid prioritization for certain services.
But in 2017, the FCC under then-Chairman Ajit Pai repealed the net neutrality rules, largely because Pai believed the regulation amounted to government control of the internet. In 2020, Pai claimed that despite the repeal, the “internet has remained free and open. And it’s stronger than ever.”
Coldewey says there are two sides to the net neutrality argument:
The basic argument against net neutrality is that the internet ain’t broke, so don’t fix it, especially not by reclassifying it in a way that could change a great deal, resulting in more and worse government interference. The basic argument in favor of it is that, fundamentally speaking, broadband is a communications service that the Federal Communications Commission should regulate, resulting in more and better consumer protections.
No matter which side you’re on, net neutrality will likely face lengthy legal battles in the coming months.
“Reinstating Title II is now an article of faith for many in Washington (and a handy fundraising tool to boot),” FCC Commissioner Brendan Carr said in a statement. “But make no mistake: any FCC decision to impose Title II on the Internet will be overturned by the courts, by Congress, or by a future FCC.”
The FCC is currently accepting public comments on the proposal. Once the commenting period closes, the agency will take a final vote.
How AI is bringing back the deceased
Axios is reporting on how AI is being used to create videos of the deceased. While this practice may help those grieving a loss, it also raises privacy and consent issues.
“Like most of what's happening in generative AI innovation right now, startups and content creators are already creating AI-generated facsimiles of the dead without considering the consequences,” notes Axios.
Some find the practice disturbing, especially when it impacts them personally.
Zelda Williams — daughter of comedian and actor Robin Williams, who died by suicide in 2014 — publicly called AI deepfakes of her father "personally disturbing" earlier this month.
Williams even took to Instagram to voice her opinion.
"These recreations are, at their very best, a poor facsimile of greater people, but at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for," she wrote.
But not everyone feels disturbed by the videos.
In 2020, Kim Kardashian gushed on Twitter about the AI hologram of her dead father that Kanye gave her for her birthday.
You can see these AI-generated experiences yourself by attending “live performances” of deceased musicians including Buddy Holly and Roy Orbison—though I’m not sure who would want to see this.
Catch up quick
Twitter alternative Pebble is shutting down. The social network formerly known as T2 had a small user base, with fewer than 3,000 daily active users. “I think the competitive landscape evolved faster than we had thought,” CEO Gabor Cselle told TechCrunch. “I didn’t think that quite as many people — established organizations and newcomers — would try to do the same thing that we were doing and in very similar ways.”
41 states and DC are suing Meta, claiming the company misled the public about the harm social media causes for young people. “The states also allege that Meta knowingly has marketed its products to users under the age of 13, who are barred from the platform by both Meta’s policies and federal law,” notes The Wall Street Journal. “The states are seeking to force Meta to change product features that they say pose dangers to young users.”
Cybersecurity experts are warning of ChatGPT-written phishing emails. "Cybersecurity officials and industry leaders have long warned that hackers could weaponize ChatGPT and similar AI tools to quickly write phishing emails that the average person would think are authentic," notes Axios. "ChatGPT developer OpenAI has put in safeguards that prevent the generative AI chatbot from responding to direct requests for a phishing email, malware or other malicious cyber tools."