EU Defends Censorship Law While Commission Staff Shift to Auto-Deleting Signal Messages

A senior European Union official responsible for enforcing online speech rules is objecting to what he describes as intimidation by Washington, even as his own agency advances policies that expand state involvement in digital expression and private communications.

Speaking Monday at the University of Amsterdam, Prabhat Agarwal, who leads enforcement of the Digital Services Act at the European Commission, urged regulators and civil society groups not to retreat under pressure from the United States. His remarks followed the February 3 release of a report by the US House Judiciary Committee that included the names and email addresses of staff involved in enforcing and promoting Europe’s censorship laws.

“Don’t let yourself be scared. We at the Commission stand by the European civil society organizations that have been threatened, and we stand by our teams as well,” Agarwal said, as reported by Politico.

The report’s publication came shortly after Washington barred a former senior EU official and two civil society representatives from entering the United States. European officials interpreted those moves as an effort to deter implementation of the DSA, the bloc’s flagship content regulation framework governing large online platforms.

Starmer Announces Yet More Censorship

Even more censorship is on the way. The Government has announced plans to force AI chatbots to comply with malicious communications laws – and to give itself Orwellian powers to bring in yet more speech restrictions without Parliamentary oversight. Toby writes about the moves in the Telegraph.

The Government intends to bring forward amendments of its own to the schools Bill that will supposedly close a loophole in the Online Safety Act to make sure AI chatbots comply with Britain’s draconian censorship laws. That will mean that if Grok says something in response to a user prompt that breaches, say, the Malicious Communications Act 1988, which was designed to protect women from obscene phone calls, Ofcom can fine its parent company £18 million or 10% of its annual global turnover, whichever is higher.

This will be the death knell of Britain’s burgeoning AI sector, particularly as chatbots become more autonomous. What tech entrepreneur will risk setting up an AI company in the UK, knowing that if a chatbot shares an anti-immigration meme or misgenders a trans person, it could mean a swingeing fine?

Indeed, I wouldn’t be surprised if xAI, along with OpenAI and Anthropic, decide to withdraw access to their chatbots from UK residents. At the very least, we’ll be saddled with lobotomised versions that trot out progressive bromides whenever they’re asked a political question.

In addition, the Government has said it will pass a new law to stop children sending or receiving nude images. Needless to say, that’s already a criminal offence under the Protection of Children Act 1978, so what does the Government have in mind?

It has not said, but I fear it means embedding surveillance software in every smartphone to enable the authorities to monitor users’ activity, no doubt accompanied by mandatory digital ID so no one will be able to hide. Not even the People’s Republic of China does that.

The Government unveiled some other Orwellian measures, but rather than bring them in as revisions to the schools Bill, it will put through amendments that will enable it to make further changes to Britain’s censorship regime via secondary legislation, i.e., it will grant itself sweeping Henry VIII powers.

It’s worth bearing in mind that secondary legislation cannot be amended and allows little time for debate. The Government’s excessive reliance on secondary legislation has been criticised by the House of Lords Constitution Committee and the Delegated Powers and Regulatory Reform Committee.

US plans online portal to bypass content bans in Europe and elsewhere

The U.S. State Department is developing an online portal that will enable people in Europe and elsewhere to see content banned by their governments, including alleged hate speech and terrorist propaganda, a move Washington views as a way to counter censorship, three sources familiar with the plan said.

The site will be hosted at “freedom.gov,” the sources said. One source said officials had discussed including a virtual private network function to make a user’s traffic appear to originate in the U.S. and added that user activity on the site will not be tracked.

Headed by Undersecretary for Public Diplomacy Sarah Rogers, the project was expected to be unveiled at last week’s Munich Security Conference but was delayed, the sources said.

Reuters could not determine why the launch did not happen, but some State Department officials, including lawyers, have raised concerns about the plan, two of the sources said, without detailing the concerns.

The project could further strain ties between the Trump administration and traditional U.S. allies in Europe, tensions already heightened by disputes over trade, Russia’s war in Ukraine and President Donald Trump’s push to assert control over Greenland.

The portal could also put Washington in the unfamiliar position of appearing to encourage citizens to flout local laws.

In a statement to Reuters, a State Department spokesperson said the U.S. government does not have a censorship-circumvention program specific to Europe but added: “Digital freedom is a priority for the State Department, however, and that includes the proliferation of privacy and censorship-circumvention technologies like VPNs.”

The spokesperson denied any announcement had been delayed and said it was inaccurate that State Department lawyers had raised concerns.

Senators Talk Digital Freedom for Iran While Expanding Surveillance at Home

Three US senators want federal funding to help Iranians bypass censorship and access VPNs. The same three senators have spent years supporting the surveillance systems that track Americans online.

We obtained a copy of their letter to Secretary of State Marco Rubio for you here.

Senators Lindsey Graham (R-SC), James Lankford (R-OK), and Jacky Rosen (D-NV) are backing funding for anti-censorship technology and virtual private networks abroad.

Senator Cory Booker (D-NJ), whose privacy record is largely clean, is also supporting the effort. The bipartisan coalition wants to help people circumvent government internet controls. Just not the American government’s internet controls.

Graham’s voting record reads like a blueprint for the surveillance state he claims to oppose overseas. He voted for the Patriot Act in 2001 and has supported every major expansion since. When Section 702 of FISA came up for reauthorization, Graham backed it. When Congress considered making Section 702 permanent in 2017 with no sunset clauses and no congressional review, Graham backed that too.

His encryption stance is just as consistent. Graham co-sponsored the EARN IT Act in 2020, which would pressure platforms to weaken encryption to avoid liability.

He also backed the Lawful Access to Encrypted Data (LAED) Act, a bill that would require companies to build backdoors into their security systems. VPNs work because of encryption. Graham has spent years trying to break it.

He’s also pushed to repeal Section 230 protections and supported requiring government licenses for companies offering AI tools. When surveillance mechanisms he championed caught his own communications, Graham complained. Privacy for senators. Mass surveillance for everyone else.

Lankford introduced the Free Speech Fairness Act, which sought to remove restrictions on political speech by religious and nonprofit organizations. That same senator has backed the Kids Online Safety Act (KOSA), which would likely require platforms to implement age verification and give regulators the power to pressure companies into removing content.

He called for Section 230 to be “ripped up” and backed a national strategy against antisemitism that includes government coordination on speech. When Edward Snowden revealed the scope of NSA surveillance, Lankford branded him a traitor for telling the public what their government was doing.

Meta Granted Patent That Allows Facebook Users to Post After They Die

A patent granted to Meta in December gives an inside look at how Facebook users may still be able to post beyond the grave.

According to the patent, the large language model will be “simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

The AI model would essentially replicate the deceased person’s digital presence by analyzing data such as their writing style, tone, and expressed beliefs, allowing it to continue posting content and even commenting on friends’ posts in their voice.
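The patent describes style-conditioning in general terms only. As a purely hypothetical illustration of what conditioning a language model on "user-specific" data could look like, a persona prompt might be assembled from a user's past posts and stated views; none of the function names or formats below come from the patent.

```python
# Hypothetical sketch only: this is NOT Meta's patented method, just an
# illustration of building a style-conditioning prompt from historical
# activity before handing it to a language model.

def build_persona_prompt(past_posts: list[str], beliefs: list[str]) -> str:
    """Assemble a prompt asking a model to imitate a user's voice."""
    examples = "\n".join(f"- {p}" for p in past_posts[-5:])  # most recent posts
    stances = "; ".join(beliefs)
    return (
        "Write a social media post imitating the author below.\n"
        f"Known views: {stances}\n"
        "Recent posts:\n"
        f"{examples}\n"
        "New post:"
    )

prompt = build_persona_prompt(
    past_posts=["Great hike today!", "Nothing beats morning coffee."],
    beliefs=["enjoys the outdoors"],
)
print(prompt)
```

A production system would presumably fine-tune or retrieve over far more data than a prompt can hold; the sketch only shows the basic idea of replaying tone and expressed beliefs back to a model.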

It gets stranger.

The patent also referenced technology that would allow users to have phone calls with the deceased or even video calls.

Per Business Insider:

In the patent, Meta lays out why it thinks people might need this.

If you’re no longer posting online — whether that’s because you need a break from social media or … you … die — your followers’ user experience will be affected. In short, they’ll miss you.

“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform,” the document says.

To fill that void, Meta would essentially create a digital clone of your social media presence, training a model on “user-specific” data — including historical platform activity, such as comments, likes, or content — to understand how you would (or rather, did) behave.

The patent was filed by Meta’s CTO, Andrew Bosworth.

Despite being granted the patent, Meta has claimed it has no plans to implement the technology.

In recent years, the idea of using AI to bring someone back from the dead digitally has gained some momentum.

In 2025, the family of Christopher Pelkey, who was killed in a road rage incident in Arizona, used an AI-generated video of Pelkey as an impact statement in the courtroom.

Keir Starmer Considers VPN ID Checks as UK Expands Online Safety Act Powers

Having already installed itself as the nation’s digital nanny with its online censorship law, the Online Safety Act, the government is now peering into the last remaining corner of online privacy and wondering whether it, too, might benefit from a sturdy padlock.

Prime Minister Keir Starmer has confirmed that ministers are examining new powers to move beyond social media age limits and into the architecture of private browsing itself. The latest idea involves ID checks for VPN use and chatbots.

Naturally, this is all for the children.

A VPN, or virtual private network, is often treated like a villainous contraption, but it’s actually a tool that encrypts your internet traffic and masks your location. In plain English, it stops internet providers, advertisers, and sometimes governments from tracking what you read, watch, or search.
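The practical effect of that masking can be sketched in a few lines. The following is a conceptual illustration, not a working VPN: it models what an on-path observer such as an ISP can see with and without a tunnel, with all names and values invented.

```python
# Conceptual sketch (not a real VPN implementation): compare the metadata
# visible to an intermediary carrying a packet with and without a tunnel.

def observer_view(packet: dict, vpn: bool) -> dict:
    """Return the metadata visible to an on-path observer (e.g., an ISP)."""
    if vpn:
        # With a VPN, the true destination and payload travel inside an
        # encrypted tunnel; only the tunnel endpoint is visible in transit.
        return {"destination": "vpn-server.example", "payload": "<encrypted>"}
    # Without a VPN, the observer sees where you are going and what you sent.
    return {"destination": packet["destination"], "payload": packet["payload"]}

packet = {"destination": "news-site.example", "payload": "GET /article"}
print(observer_view(packet, vpn=False))  # observer sees the site and request
print(observer_view(packet, vpn=True))   # observer sees only the VPN endpoint
```

This is why ID checks for VPN use are consequential: the tool's entire purpose is to keep that first dictionary out of third-party hands.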

40 State Attorneys General Want To Tie Online Access to ID

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter for you here.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.
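To see why device-level verification implies credentialing, consider a minimal hypothetical sketch of an OS-held age attestation: a provisioned key signs an "over-18" claim that apps verify. The key names, token format, and flow below are invented for illustration; the bill only directs agencies to study such systems.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of device-level age attestation: the operating system
# holds a credential provisioned by an identity issuer and presents a signed
# age claim to apps. All names and formats here are invented.

DEVICE_KEY = b"secret-provisioned-by-credential-issuer"  # assumed issuer key

def issue_token(over_18: bool) -> bytes:
    """OS side: sign an age claim with the provisioned device key."""
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest().encode()
    return claim + b"." + tag

def verify_token(token: bytes) -> bool:
    """App side: accept only untampered claims asserting over-18 status."""
    claim, _, tag = token.partition(b".")
    expected = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected) and json.loads(claim)["over_18"]

print(verify_token(issue_token(True)))  # True
```

The sketch makes the dependency concrete: the token is only meaningful because some issuer vouched for the holder's identity when provisioning the key, which is exactly the link between browsing and real-world identity that anonymity depends on avoiding.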

“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill, to accompany the Senate bill, attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.

EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (under the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

First, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.
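The mechanics of exact hash matching make the context problem concrete. The following minimal sketch, with invented digest values, shows the basic principle: identical bytes always match, and anything else never does. Real deployments such as PhotoDNA use perceptual hashes that tolerate re-encoding, which is also where false positives enter.

```python
import hashlib

# Minimal sketch of hash-based "known material" detection: compare each
# file's cryptographic digest against a database of flagged digests.
# (Illustration only; the flagged digest below is the SHA-256 of b"test".)

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_attachment(data: bytes) -> bool:
    """Return True if the file's SHA-256 digest appears in the known set."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# An exact match carries no notion of context or intent: the comparison
# sees only bytes, never what the content means or why it was sent.
print(flag_attachment(b"test"))   # True: digest is in the set
print(flag_attachment(b"test2"))  # False: one changed byte, no match
```

Because the lookup is binary and context-free, every match still lands on a human reviewer to decide whether it is criminally relevant, which is where the false-positive burden described below arises.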

The German Federal Criminal Police Office (BKA) reports that close to half of all chat control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

EU Targets VPNs as Age Checks Expand

Australia’s under-16 social media restrictions have become a practical reference point for regulators who are moving beyond theory and into enforcement.

As the system settles into routine use, its side effects are becoming clearer. One of the most visible has been the renewed political interest in curbing tools that enable private communication, particularly Virtual Private Networks. That interest carries consequences well beyond “age assurance.”

A January 2026 briefing we obtained from the European Parliamentary Research Service traces a sharp rise in VPN use following the introduction of mandatory age checks.

The report notes “a significant surge in the number of virtual private networks (VPNs) used to bypass online age verification methods in countries where these have been put in place by law,” placing that trend within a broader policy environment where “protection of children online is high on the political agenda.”

Australia’s experience fits this trajectory. As age gates tighten, individuals reach for tools that reduce exposure to monitoring and profiling. VPNs are the first port of call in that response because they are widely available, easy to use, and designed to limit third-party visibility into online activity.

The EPRS briefing offers a clear description of what these tools do. “A virtual private network (VPN) is a digital technology designed to establish a secure and encrypted connection between a user’s device and the internet.”

It explains that VPNs hide IP addresses and route traffic through remote servers in order to “protect online communications from interception and surveillance.” These are civil liberties functions, not fringe behaviors, and they have long been treated as legitimate safeguards in democratic societies.
