Childproofing the Internet

For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow individuals to conduct themselves online freely and outside the purview of politicians.

There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential arguments for online speech rules, including political bias, political extremism, drug crime, or the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.

Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn’t know why, or how.

But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.

Keep reading

Congress To Investigate WHO Plans To Use “Listening Surveillance Systems” To Identify “Misinformation”

If you’ve been following our reporting on the issue, you’ll already know that the new World Health Organization (WHO) pandemic prevention initiative, the Preparedness and Resilience for Emerging Threats (PRET), recommends using “social listening surveillance systems” to identify “misinformation.” But as more people are learning about how unelected bodies are being used to suppress speech and potentially override sovereignty, it’s starting to get more pushback.

According to documents from the UN agency, PRET aims to “guide countries in pandemic planning” and work to “incorporate the latest tools and approaches for shared learning and collective action established during the COVID-19 pandemic.”

The PRET document describes misinformation as a “health threat,” and refers to it as an “infodemic.”

“Infodemic is the overabundance of information – accurate or not – which makes it difficult for individuals to adopt behaviors that will protect their health and the health of their families and communities. The infodemic can directly impact health, hamper the implementation of public health countermeasures and undermine trust and social cohesiveness,” the document states.

However, it goes on to recommend invasive methods of countering the spread of misinformation.

“Establish and invest in resources for social listening surveillance systems and capacities to identify concerns as well as rumors and misinformation,” the WHO wrote in the PRET document.

“To build trust, it’s important to be responsive to needs and concerns, to relay timely information, and to train leaders and HCWs in risk communications principles and encourage their application.”

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.
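For a sense of what consuming such a service looks like in practice, here is a minimal sketch using the publicly documented azure-ai-contentsafety Python SDK. The endpoint and key values are placeholders, and exact response field names have varied between the preview and GA releases of the SDK.

```python
# Minimal sketch: scoring a piece of text with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package (GA release); ENDPOINT and
# KEY are placeholders for your own Azure resource values.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-api-key>"  # placeholder

client = ContentSafetyClient(ENDPOINT, AzureKeyCredential(KEY))

# Submit user-generated text for analysis across the built-in categories.
response = client.analyze_text(AnalyzeTextOptions(text="example user comment"))

# Each category comes back with a severity score; a moderation pipeline
# would typically threshold on these before flagging content for action.
for item in response.categories_analysis:
    print(item.category, item.severity)
```

A moderator-facing tool would sit on top of scores like these, surfacing only items above a chosen severity threshold for human review.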

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and GitHub Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

New Democrat Bill Calls For A Federal Agency To Create “Behavioral Codes,” Introduce “Disinformation Experts”

On May 18, two US senators introduced the Digital Platform Commission Act of 2023, a bill that seeks to empower a new federal agency, which would in turn set up a council to regulate AI in the context of social platforms.

More precisely, the new body – the Federal Digital Platform Commission – would “rule” on what are termed enforceable “behavioral codes,” and those staffing it would include “disinformation experts.”

The move by the two Democratic senators – Michael Bennet and Peter Welch – appears to have come in concert with congressional testimony delivered by OpenAI CEO Sam Altman: the bill was presented shortly after he testified, and it backs Altman’s idea of forming a new federal agency of this kind.

Altman had more thoughts on how all this should work – the new agency, according to him, might be given the power to restrict AI development via licenses or credentialing.

The speed with which the two senators picked up on this to announce their bill may owe to the fact that Bennet “only” had to go back and update a bill he had already introduced in 2022. This time around, the proposed legislation has been changed in a number of ways, most notably by redefining what a digital platform is.

The bill wants this definition to also cover companies that provide content “primarily” generated by algorithmic processes. To that end, it proposes that the future Commission be given authority over how personal information is used in decision-making or content generation – language thought to refer specifically to tech like ChatGPT.

Keep reading

The Internet Dodges Censorship by the Supreme Court

The Supreme Court today refused to weaken one of the key laws supporting free expression online, and recognized that digital platforms are not usually liable for their users’ illegal acts, ensuring that everyone can continue to use those services to speak and organize.

The decisions in Gonzalez v. Google and Twitter v. Taamneh are great news for a free and vibrant internet, which inevitably depends on services that host our speech. The court in Gonzalez declined to address the scope of 47 U.S.C. § 230 (“Section 230”), which generally protects users and online services from lawsuits based on content created by others. Section 230 is an essential part of the legal architecture that enables everyone to connect, share ideas, and advocate for change without needing immense resources or technical expertise. By avoiding addressing Section 230, the Supreme Court avoided weakening it.

In Taamneh, the Supreme Court rejected a legal theory that would have made online services liable under the federal Justice Against Sponsors of Terrorism Act merely because members of terrorist organizations or their supporters used these services like we all do: to create and share content. The decision is another win for users’ online speech, as it avoids an outcome where providers censor far more content than they do already, or even prohibit certain topics or users entirely, for fear of later being held liable for aiding or abetting their users’ wrongful acts.

Given that both decisions could have had disastrous consequences for users’ free expression, EFF is pleased that the Supreme Court left existing legal protections for online speech in place.

But we cannot rest easy. There are pressing threats to users’ online speech as Congress considers legislation to weaken Section 230 and otherwise expand intermediary liability. Users must continue to advocate for their ability to have a free and open internet that everyone can use.

Keep reading

These Senators Want the Federal Government To Verify Your Age Online

Despite their many disagreements, Republicans and Democrats have developed a common affinity for social media regulation, largely relying on the disputed assumption that platforms like Instagram and TikTok severely degrade children’s mental health. The latest regulatory proposal in Congress is the Protecting Kids on Social Media Act, sponsored by a bipartisan group of four senators: Sens. Brian Schatz (D–Hawaii), Tom Cotton (R–Ark.), Chris Murphy (D–Conn.), and Katie Britt (R–Ala.).

The bill features several flawed policies drawn from recent state and federal social media proposals. It would require social media platforms to verify the age of every would-be user. Platforms could allow the unverified to view content, but not to interact with it or with other users. In addition to verifying their age to register an account, underage teens would need proof of parental consent. Those under 13 years old would be completely barred from registering accounts.

The bill does propose one novel—and potentially dangerous—feature. It would establish a “pilot program” for a federally run verification system. This system would ascertain social media users’ age and, for teen users, confirm parental consent.

Age verification mandates, which invariably entail intrusive data gathering, threaten user data privacy and security. They also violate the individual’s right to speak freely and anonymously online. Although the bill’s authors sought to mitigate the risks such mandates pose to users, they largely failed: those risks are inextricable from the process of age verification itself. The bill also proposes a legal safe harbor for social media platforms that choose to use the pilot program; to avoid even the appearance of noncompliance, many platforms will do just that.

The proposed pilot program would require would-be social media users to submit documentation to the Department of Commerce in order to verify their age. In return, the pilot program would provide a “credential” to be submitted to social media platforms. Users would verify parental consent by the same process. To administer the program, the government would necessarily obtain and store troves of personal data on American social media users—to prove regulatory compliance, if nothing else.
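To make the data flows concrete, here is a hypothetical sketch of the credential scheme the bill describes. None of these class or field names come from the bill or any real system; they only illustrate what each party would have to hold, including the personal data the issuing agency collects at enrollment.

```python
# Hypothetical model of the pilot program's credential flow; all names are
# illustrative, not drawn from the bill or any real system.
from dataclasses import dataclass
import secrets

@dataclass
class Credential:
    token: str             # opaque value the user hands to a platform
    age_verified: bool     # whether the user met the age threshold
    parental_consent: bool

class VerificationProgram:
    """Stands in for the Commerce-run pilot program (hypothetical)."""

    def __init__(self) -> None:
        self._issued: dict[str, Credential] = {}

    def enroll(self, identity_documents: bytes, age: int, consent: bool) -> Credential:
        # The issuer must inspect real identity documents to vouch for age,
        # i.e., it necessarily collects personal data at enrollment.
        cred = Credential(secrets.token_urlsafe(16), age >= 13, consent)
        self._issued[cred.token] = cred
        return cred

    def check(self, token: str) -> Credential | None:
        # A platform redeems the token to confirm the credential is genuine.
        return self._issued.get(token)

# Platform side: gate account registration on a redeemed credential.
program = VerificationProgram()
cred = program.enroll(b"<scanned ID>", age=15, consent=True)
result = program.check(cred.token)
allowed = result is not None and result.age_verified and result.parental_consent
print("registration allowed:", allowed)
```

Even in this stripped-down form, the issuer ends up holding identity documents and a ledger of issued tokens, which is exactly the kind of centralized store the privacy objections turn on.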

To protect user privacy, the bill directs Commerce to “keep no records of the social media platforms where users have verified their identity.” It would also forbid the agency from sharing user data with platforms or law enforcement without user consent, a court order, or a program-specific fraud or oversight investigation.

Nonetheless, the bill would require users to register personal information with state authorities simply to speak online. Government agencies, under a legal pretext, could retrieve from social media platforms the records necessary to identify user accounts. Democrats have long been skeptical of the federal government’s data abuses, but both parties—including newly skeptical Republicans—ought to understand these risks.

Keep reading

The EARN IT Act, an attack on encrypted communications, to be reintroduced next week

Those behind the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act must be hoping that the third time’s a charm for this previously widely opposed piece of legislation, which is set to be reintroduced next week.

The previous two attempts to make EARN IT into law failed amid outcry from opponents who said that while designed to protect children, the bill would fail to do that – but would still damage online privacy.

Now comes the third, bipartisan attempt, sponsored by Republican Lindsey Graham and Democrat Richard Blumenthal, to bring changes to Section 230 of the Communications Decency Act (CDA).

Critics say that the amendment as envisaged by EARN IT would harm internet users by removing legal protections Section 230 gives tech companies for third party content.

The consequence would be those companies protecting themselves by engaging in (even more) censorship, and “working” with the government to this end – even more than we are aware they already do.

At its core, EARN IT targets platforms for violations of child sexual abuse material (CSAM) rules that exist at the federal and state level.

But these platforms are allegedly reluctant to “moderate” – i.e., censor – content in a heavy-handed manner, and for that reason oppose the legislation.

Keep reading

Parody Hitman Website Nabs Air National Guardsman After He Allegedly Applied For Murder-For-Hire Jobs

A Hermitage, Tennessee, man is facing federal charges after meeting with an undercover FBI agent to finalize a deal to murder an individual for payment, U.S. Attorney Henry C. Leventis announced.

Josiah Ernesto Garcia, 21, was charged yesterday in a criminal complaint with the use of interstate facilities in the commission of murder-for-hire.

According to the complaint, Garcia needed money to support his family and in mid-February began searching online for contract mercenary jobs, coming across the website www.rentahitman.com. The site was originally created in 2005 to advertise a cybersecurity startup; the company failed, and over the next decade the site received many inquiries about murder-for-hire services. The website’s administrator then converted it into a parody site containing false testimonials from people who purport to have used hit man services, along with an intake form where people can request services. The website also has an option for someone to apply to work as a hired killer.

Garcia submitted an employment inquiry indicating that he was interested in working as a hit man. He followed up on this initial request by submitting identification documents and a resume indicating that he was an expert marksman and had been employed in the Air National Guard since July 2021. The resume also noted that Garcia was nicknamed “Reaper,” a moniker earned through his military experience and marksmanship. Garcia continued to follow up with the website administrator, indicating that he wanted to start work as soon as possible.

Keep reading

Biden Looking at Expanding Internet Surveillance After Discord Leaks

The Biden administration appears poised to increase internet surveillance in response to the leaked Pentagon documents that appear to have been posted on the messaging platform Discord.

NBC News reported on Wednesday that the administration was looking at expanding how it monitors social media sites and chat rooms.

The report cited an unnamed senior administration official and a congressional official who said the administration wants to “expand the universe” of social media sites that US law enforcement and intelligence agencies monitor.

According to the congressional source, the report said the “intelligence community is now grappling with how it can scrub platforms like Discord in search of relevant material to avoid a similar leak in the future.”

According to The Washington Post, the top-secret documents were posted on a private Discord server, and a member later reposted them on public servers in March. The documents have been circulating on the internet since then and were discovered by The New York Times last week.

Keep reading

Italy’s ChatGPT ban fails to deter users due to VPNs

VPN applications have gained a large number of new users since Italy’s ChatGPT ban, as people turn to the apps to access the chatbot.

Recently, Italy’s data protection authority banned ChatGPT over privacy concerns. However, this didn’t stop people from reaching OpenAI’s services. PureVPN noticed an unusual increase in traffic coming from Italy on its website after the ban went into effect on April 1. According to a recent blog post from the company, “Italians have been turning to VPN services following the decision of the country’s data protection authority to ban ChatGPT over privacy concerns.”

ChatGPT is one of the most popular topics on the internet. People from all around the globe use it to get their work done more easily. However, the Italian government prevented the chatbot from being used in the country. Authorities further alleged that OpenAI failed to verify its users’ ages and to enforce prohibitions barring anyone under the age of 13 from using ChatGPT.

“ChatGPT has garnered more than 100 million users since its launch two months ago. The advanced chatbot is just as popular in Italy as in other countries because of its ability to have human-like conversations. However, with Italians unable to access ChatGPT, many of them are turning to VPNs to circumvent the block,” says PureVPN.

VPNs help users mask their real IP addresses and present one from a country of their choosing. Italians are thus accessing ChatGPT through an IP address located in another country.
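As a concrete illustration of the mechanism, the sketch below checks what IP address a remote service observes, first directly and then through a proxy tunnel. The api.ipify.org echo service is real; the SOCKS proxy address is a placeholder for whatever local endpoint a VPN client exposes, and routing requests through it needs the requests[socks] extra installed.

```python
# Illustration: a remote service only ever sees the tunnel's exit IP.
# Requires: pip install "requests[socks]" ; the proxy address below is a
# placeholder for a real VPN/proxy endpoint.
import requests

# Without a tunnel, the service sees your real (e.g., Italian) address.
print("direct:", requests.get("https://api.ipify.org", timeout=10).text)

# Through the tunnel, it sees the exit server's address instead, so a
# geoblock keyed on Italian IPs no longer matches.
proxies = {"https": "socks5://127.0.0.1:1080"}  # placeholder endpoint
print("tunneled:", requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text)
```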

Keep reading