Keir Starmer Considers VPN ID Checks as UK Expands Online Safety Act Powers

Having already installed itself as the nation’s digital nanny with its online censorship law, the Online Safety Act, the government is now peering into the last remaining corner of online privacy and wondering whether it, too, might benefit from a sturdy padlock.

Prime Minister Keir Starmer has confirmed that ministers are examining new powers to move beyond social media age limits and into the architecture of private browsing itself. The latest idea involves ID checks for VPN use and for chatbots.

Naturally, this is all for the children.

A VPN, or virtual private network, is often treated like a villainous contraption, but it’s actually a tool that encrypts your internet traffic and masks your location. In plain English, it stops internet providers, advertisers, and sometimes governments from tracking what you read, watch, or search.
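As a rough illustration of the tunnelling idea, here is a toy model (not real VPN cryptography; the server names and the XOR-keystream "cipher" are invented for demonstration). The point is what an on-path observer, such as an internet provider, can and cannot see:

```python
import hashlib

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR-keystream cipher for illustration only -- NOT real cryptography.
    A real VPN uses an authenticated cipher such as AES-GCM or ChaCha20-Poly1305."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    # XOR is its own inverse, so the same function also decrypts.
    return bytes(b ^ k for b, k in zip(data, out))

# Without a VPN: the ISP sees exactly where the packet is going.
plain_packet = {"dst": "news-site.example", "payload": b"GET /article"}

# With a VPN: the ISP sees only the VPN server; the real destination
# travels inside the encrypted payload.
key = b"shared-tunnel-key"
inner = b"news-site.example|GET /article"
tunnelled_packet = {"dst": "vpn-server.example",
                    "payload": keystream_encrypt(key, inner)}

print(plain_packet["dst"])      # the ISP learns the real site
print(tunnelled_packet["dst"])  # the ISP learns only the VPN server
```

Only the VPN operator, holding the key at the far end of the tunnel, can recover the inner destination and request.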

Keep reading

40 State Attorneys General Want To Tie Online Access to ID

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter for you here.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.
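To see why device-level checks imply an identity trail, here is a minimal sketch of how a signed age credential might work. All names are hypothetical, and a real system would use public-key signatures rather than this shared-secret HMAC; the structural point is that verification has to see a claim bound to an identifier:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # hypothetical credentialing authority's key

def issue_credential(user_id: str, birth_year: int) -> dict:
    """The issuer binds an age claim to a stable identifier and signs it."""
    claim = json.dumps({"user_id": user_id, "birth_year": birth_year},
                       sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_over_18(cred: dict, current_year: int = 2026) -> bool:
    """The platform checks the signature, then the age claim.
    Verification necessarily reads the identifier inside the claim."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # forged or tampered credential
    return current_year - json.loads(cred["claim"])["birth_year"] >= 18

cred = issue_credential("jane-doe-passport-123", 1990)
print(verify_over_18(cred))  # the check passes, but exposed the identifier
```

Even in this stripped-down form, the party doing the verifying ends up handling "jane-doe-passport-123" alongside every access it gates, which is exactly the linkage anonymous browsing depends on avoiding.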

Keep reading

“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill, paired with the Senate measure, attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.

Keep reading

EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (under the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.
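Mechanically, hash-based detection is simple: each file is reduced to a fingerprint and compared against a database of fingerprints of already-catalogued material. A minimal sketch using exact SHA-256 matching (real deployments often use perceptual hashes, which tolerate small edits, but they too can only re-detect material that is already in the database):

```python
import hashlib

# Database of fingerprints of previously catalogued ("known") files.
known_hashes = {hashlib.sha256(b"known-illegal-file").hexdigest()}

def scan(attachment: bytes) -> bool:
    """Flag an attachment only if its hash matches the known-material database."""
    return hashlib.sha256(attachment).hexdigest() in known_hashes

print(scan(b"known-illegal-file"))    # exact copy of catalogued file: flagged
print(scan(b"known-illegal-file."))   # one changed byte: exact match fails
print(scan(b"newly produced file"))   # material not in the database: invisible
```

This is the structural limit the critics point to: the mechanism can only re-detect copies of what has already been catalogued, while newly produced material never matches anything.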

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

First, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.

The German Federal Criminal Police Office (BKA) reports that close to half of all chat control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

Keep reading

EU Targets VPNs as Age Checks Expand

Australia’s under-16 social media restrictions have become a practical reference point for regulators who are moving beyond theory and into enforcement.

As the system settles into routine use, its side effects are becoming clearer. One of the most visible has been the renewed political interest in curbing tools that enable private communication, particularly Virtual Private Networks. That interest carries consequences well beyond “age assurance.”

A January 2026 briefing we obtained from the European Parliamentary Research Service traces a sharp rise in VPN use following the introduction of mandatory age checks.

The report notes “a significant surge in the number of virtual private networks (VPNs) used to bypass online age verification methods in countries where these have been put in place by law,” placing that trend within a broader policy environment where “protection of children online is high on the political agenda.”

Australia’s experience fits this trajectory. As age gates tighten, individuals reach for tools that reduce exposure to monitoring and profiling. VPNs are the first port of call in that response because they are widely available, easy to use, and designed to limit third-party visibility into online activity.

The EPRS briefing offers a clear description of what these tools do. “A virtual private network (VPN) is a digital technology designed to establish a secure and encrypted connection between a user’s device and the internet.”

It explains that VPNs hide IP addresses and route traffic through remote servers in order to “protect online communications from interception and surveillance.” These are civil liberties functions, not fringe behaviors, and they have long been treated as legitimate safeguards in democratic societies.

Keep reading

Dad claims 16-year-old daughter took her own life after meeting a predator on Roblox, slams game platform beloved by kids

Penelope Sokolowski was just 16 years old when she took her own life last February.

Her father, Jason, believes her suicide was the culmination of a grooming process that began on Roblox, the game platform beloved by kids — with some 170,000 users under the age of 13, according to company data from 2023.

“We kind of thought we were covering all the bases,” Jason told The Post, noting that his family had used a third-party app to monitor Penelope’s online activity.

Jason alleges that his only child was contacted by a predator on Roblox who coerced her into cutting his name into her chest and sending videos of herself bloodied from self-harm — and who, ultimately, sent Penelope down a spiral that culminated in her death.

The girl was 7 or 8 years old when she first signed up for Roblox, where players rove around online worlds and can chat with other users.

“I’d come in and sit in the room with her and see what she was doing, ask who those people were,” Jason said, recalling Penelope drawing an anime-style sketch for a friend she’d made on Roblox.

“As a dad I thought, oh, this is nice, she’s artistic, and she’s made artistic friends,” he added. “But I didn’t understand what Roblox was and its effect on her.”

The dad, who works in the film industry in Vancouver, British Columbia, separated from Penelope’s mother and moved out of the family home when the girl was 13.

He recalls how Penelope’s grades began to tumble and, when she was 14, he noticed scars from self-inflicted cuts on her arms, which she had been covering with bracelets and his oversized hockey jerseys. 

Penelope confided that she had been recruited into a self-harm group via Roblox, but assured her father she had moved on.

But not long after her 16th birthday, she took her own life.

Later, when Jason opened up his daughter’s cell phone, he found what he describes as a “crime scene.”

According to the dad, there were messages spanning two years with a person who egged on her self-destruction. Jason believes Penelope met this person on Roblox and then began privately conversing with them over Discord — sometimes for hours.

In one exchange, Penelope sent a photo of her chest, offering to cut herself there but worrying she couldn’t go “too deep.” Minutes later, she followed up with an image of the predator’s Discord user name written across her chest in bloodied letters.

In other images, she had carved the numbers “764” into her body. Jason believes Penelope had been contacted by a member of 764, described by the FBI as a “violent online group” that targets minors and grooms them into committing egregious acts of self-harm and violence.

Members of 764 reportedly troll platforms like Roblox looking for victims they can persuade — via grooming or sextortion — into hurting themselves.

“They are grooming girls to do whatever it is they can get a girl to do, whether it’s nudes or cuts or gore or violence,” Jason said. “[Penelope] was brainwashed all the way through.”

Keep reading

Congressional Report Warns Britain Is Exporting Censorship Worldwide

The government of British Prime Minister Keir Starmer has been directly named in a new United States congressional report that condemns Britain for adopting what it calls “copycat censorship laws,” warning that the country’s digital regulations now pose “a direct threat” to free speech.

The document, published by the United States House Committee on the Judiciary, describes an expanding campaign led by the European Commission to impose “strict digital censorship laws” on global technology platforms.

Lawmakers in Washington identified the Online Safety Act as the clearest example of this approach spreading beyond the European Union.

The Act was introduced with the stated goal of improving online safety, but it requires platforms such as X, Reddit, and TikTok to install age verification systems and remove material deemed harmful by regulators.

US lawmakers say these provisions give the British government broad authority to dictate what can and cannot be said online.

Keep reading

The Digital Media Oligarchy: Who Owns Online News? 

When Ben Bagdikian, an esteemed journalist and early FAIR contributor, published his groundbreaking book The Media Monopoly in 1983, he painted a troubling picture of US media consolidation, reporting that 50 corporations controlled the media business. With each reprint, that number dwindled (FAIR.org, 6/1/87). When FAIR replicated his analysis in 2011 (Extra!, 10/11), it stood at 20.

Now, over 40 years after the initial release of The Media Monopoly, the media landscape has transformed drastically. Even Bagdikian’s later editions, written at the dawn of the internet, could not fully anticipate how profoundly digital technology would reconfigure the media oligarchy.

“News” is increasingly synonymous with online news. Over half the US public (56%) say that they “often” get news through their digital devices—compared to less than 1 in 3 (32%) who often get news from TV, 1 in 9 from radio and only 1 in 14 from print publications like newspapers or magazines (Pew, 9/25/25).

Which raises the question: Who owns the leading online news sites—and, by extension, largely shapes the ideas and information that reach millions of Americans?

Each month, Press Gazette, a London-based magazine for the journalism industry, ranks the top 50 news websites in the US in order of monthly visits, based on data from the marketing firm Similarweb. FAIR tallied Press Gazette’s results over a 12-month span, from December 2024 to November 2025, to get a figure for total US visits to major news sites over that period: 45.6 billion.

More than half of those visits, nearly 25.5 billion, went to news sites controlled by just seven families or corporate entities.

Keep reading

Wyoming Introduces First-Ever Foreign Censorship Shield Bill

Wyoming has taken a historic step to insulate American speech from foreign interference with the introduction of the Wyoming Guaranteeing Rights Against Novel International Tyranny and Extortion (GRANITE) Act, House Bill 0070, which would be the first US law designed to create a private right of action against foreign censorship enforcement.

Representative Daniel Singh introduced the bill, declaring that “foreign governments have decided they can threaten American citizens and American companies for speech that is protected by our Constitution…Wyoming is drawing a line in the sand.” The measure aims to establish Wyoming as a refuge for free expression and digital innovation, directly challenging what lawmakers describe as an escalating campaign of transnational censorship pressure.

The legislation provides that any Wyoming resident, business, or US person with servers in the state may sue foreign governments or international organizations that attempt to enforce censorship demands against them for First Amendment protected speech. Each violation could cost the offending entity at least $1 million or 10% of its US revenue, whichever is higher.

The GRANITE Act prohibits Wyoming courts and agencies from recognizing or enforcing foreign censorship judgments. It also forbids any state cooperation with such orders, including extradition requests or data demands linked to speech that is constitutionally protected in the US. Under the bill, no Wyoming authority may help a foreign state investigate, penalize, or prosecute individuals over lawful expression.

We obtained a copy of the bill for you here.

Keep reading

Nine Bureaucracies Walk Into Your Browser and Ask for ID

By the time you’re reading this, there’s a decent chance that somewhere, quietly and with a great deal of bureaucratic back-patting, someone is trying to figure out exactly how old you are. And not because they’re planning a surprise party.

Not because you asked them to, but because the nine horsemen of the regulatory apocalypse have decided that the future of a “safe” internet depends on everyone flashing their ID like they’re trying to get into an especially dull nightclub.

This is the nightmare of “age assurance,” a term so bloodlessly corporate you can practically hear it sighing into its own PowerPoint.

What it describes is a sprawling, gelatinous lump of biometric estimation, document scans, and AI-ified guesswork, stitched together into one big global initiative under the cheery-sounding Global Online Safety Regulators Network, or GOSRN. Catchy.

Formed in 2022, presumably after someone at Ofcom had an especially boring lunch break, GOSRN now boasts nine national regulators, including the UK, France, Australia, and that well-known digital superpower, Fiji, who have come together to harmonize policies on how to tell whether someone is too young to look at TikTok for adults.

The group is currently chaired by Ireland’s Coimisiún na Meán.

This month, this merry band of regulators released a “Position Statement on Age Assurance and Online Safety Regulation.”

We obtained a copy of the document for you here.

Inside this gem of a document is a plan to push shared age-verification principles across borders, including support for biometric analysis, official ID checks, and the general dismantling of anonymity for the greater good of child protection.

Keep reading