EU Going To War With VPNs In Bid To “Save The Children”

Western European governments and EU bureaucrats are advancing tighter regulations on VPNs as part of a broader push for “online age verification” and their ‘Chat Control’ agenda.  Privacy advocates and digital rights groups warn that Europe is drifting towards a surveillance and censorship regime similar to internet restrictions and firewalls used by Russia and China.

Last week European Commission Executive Vice-President Henna Virkkunen suggested that Brussels may need to address the use of VPNs to bypass the EU’s upcoming age-verification systems.  Speaking during a press conference on the EU’s new digital age-verification app, Virkkunen acknowledged that users could circumvent the system with VPNs and stated that preventing such circumvention would be among the ‘next steps’ policymakers need to examine.

Her statements came only two weeks after she shared a stage with EU Commission President Ursula von der Leyen, who called for a crackdown on web media companies to “protect children” from dangerous content.  The first stage of their agenda is a government-created universal age-verification app which web companies will be required to integrate.  Von der Leyen asserts that the new restrictions are designed to “defend children’s rights” (how does restricting access protect rights?).

The Orwellian language of the EU is not coincidental.  “Child vulnerability” is a carefully chosen vehicle to manipulate public approval, opening the door to incremental government management of online content and discourse. 

Keep reading

How a Bill Banning AI Companions for Kids Could Usher in Widespread ID Checks Online

Sen. Josh Hawley’s Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act advanced out of the Senate Judiciary Committee last week. “A Trojan horse for universal online ID checks,” is how Jibran Ludwig of Fight for the Future described it.

The bill would require anyone using an AI chatbot to provide proof of identity and ban minors from interacting with many sorts of AI chatbots entirely.

Unlike some social media age verification bills, it would give parents no right to opt out of the rules the federal government sets on their kids’ technology use.

The GUARD Act is co-sponsored by Sen. Richard Blumenthal (D–Conn.), who—like Hawley—has long been a champ at moral panic around technology. (Cue: Bipartisan is just another word for really bad idea…)

And while some on the Senate Judiciary Committee expressed concerns about privacy or how this could actually backfire and harm minors, those senators still voted to advance the bill. It “easily passed in committee,” notes The Hill, despite some senators’ reservations:

Sen. Alex Padilla (D-Calif.), who voted yes, said there are concerns about “potential privacy and security risks” with the age-verification component, suggesting it may need to be “fine-tuned.”

Sen. Ted Cruz (R-Texas), who supported various kids online safety bills, said he would vote yes but noted the bill needs “some revisions.”

Cruz was concerned the bill would completely ban all AI chatbots for minors, noting their potential benefits. Hawley clarified the bill does not ban all AI chatbots for minors, but rather it “prevents AI chatbots that engage with minors from pushing sexually explicit material to the minor,” or encouraging self-harm or suicide.

Keep reading

Utah first state to hold websites liable for users who mask their location with VPNs — law goes into effect, designed to prevent bypassing age checks

Utah’s Online Age Verification Amendments, formally Senate Bill 73, take effect on May 6, making the state the first in the U.S. to explicitly target VPN use as part of age verification legislation.

Signed by Governor Spencer Cox on March 19, the controversial law establishes that a user is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN or proxy to mask their IP address. It also prohibits covered websites from sharing instructions on how to use a VPN to bypass age checks.

NordVPN has called the law an “unresolvable compliance paradox” and a “liability trap,” arguing that it holds websites responsible for identifying users whose tools are specifically designed to be unidentifiable. The EFF warned that the legal risk could push sites to either ban all known VPN IPs or mandate age verification for every visitor globally.

The law is also technically flawed, given that it assumes that a web provider can reliably detect VPN traffic and determine a user’s true physical location — they can’t. IP reputation databases such as MaxMind and IP2Proxy can flag traffic from known datacenter IP ranges, but commercial VPN providers rotate addresses constantly, and residential VPN endpoints are largely indistinguishable from standard home connections. Autonomous System Number analysis can catch traffic originating from datacenter networks, but can’t identify a personal WireGuard tunnel running on a cloud VPS, for example, which routes through the same infrastructure as ordinary web hosting.
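To make that detection gap concrete, here is a minimal Python sketch of the IP-reputation approach described above. The ranges in `DATACENTER_RANGES` are made-up placeholders standing in for a commercial feed like MaxMind or IP2Proxy; only the lookup logic is meant to be illustrative.

```python
import ipaddress

# Hypothetical sample ranges standing in for a commercial reputation
# feed; real lists contain tens of thousands of entries and go stale
# as VPN providers rotate addresses.
DATACENTER_RANGES = [
    ipaddress.ip_network("104.16.0.0/13"),
    ipaddress.ip_network("143.244.0.0/16"),
]

def looks_like_datacenter(ip: str) -> bool:
    """Flag addresses that fall inside known datacenter ranges.

    This catches only VPN exits hosted in datacenters; residential
    VPN endpoints fall outside these ranges and pass unnoticed,
    which is exactly the detection gap the article describes.
    """
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

print(looks_like_datacenter("104.17.1.1"))  # True: inside a listed range
print(looks_like_datacenter("73.92.10.5"))  # False: typical residential space
```

Even with a perfect range list, the best a website gets is “this address belongs to a hosting company,” which is also true of millions of ordinary cloud services.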

The only detection method that reliably identifies VPN protocol signatures is deep packet inspection, which analyzes traffic at the network level, not system- or app-level. China’s Great Firewall and Russia’s TSPU system deploy DPI via ISPs, but a website operator can’t because it requires access to network infrastructure that sits between the user and the server, not on the server itself.
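For contrast, the check a DPI box runs operates on raw packets, not web requests. The sketch below (plain Python, hypothetical function name) tests a UDP payload against the published WireGuard handshake-initiation format: exactly 148 bytes, message type 0x01, then three reserved zero bytes. Only an on-path network device ever sees this layer; a website operator does not.

```python
def looks_like_wireguard_handshake(udp_payload: bytes) -> bool:
    """Heuristic DPI check for a WireGuard handshake-initiation packet.

    Per the WireGuard protocol, the initiation message is exactly 148
    bytes, starting with message type 0x01 followed by three reserved
    zero bytes. A real DPI system applies this to UDP payloads captured
    on the wire between the user and the server.
    """
    return (
        len(udp_payload) == 148
        and udp_payload[0] == 0x01
        and udp_payload[1:4] == b"\x00\x00\x00"
    )

# A synthetic 148-byte payload with the right header matches:
fake = b"\x01\x00\x00\x00" + b"\x00" * 144
print(looks_like_wireguard_handshake(fake))          # True
print(looks_like_wireguard_handshake(b"\x16\x03\x01"))  # TLS-ish bytes: False
```

This is why only ISP-level systems like the Great Firewall or TSPU can run it: the check needs every UDP datagram in transit, which no HTTP endpoint receives.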

Meanwhile, setting up a personal WireGuard instance on any major cloud provider takes minutes, meaning the law will be more likely to negatively impact non-technical users who rely on commercial VPN services for legitimate privacy: journalists, people living under authoritarian regimes, political dissidents, and abuse survivors, among others.
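As a rough illustration of how low that bar is, a personal WireGuard endpoint on a generic Ubuntu VPS amounts to a handful of commands. The addresses and `<...>` key placeholders below are illustrative, not real values, and must be filled in by hand.

```shell
# Sketch: personal WireGuard server on a fresh Ubuntu VPS.
sudo apt-get install -y wireguard

# Generate the server keypair.
wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub

# Minimal server config: one tunnel interface, one permitted client.
sudo tee /etc/wireguard/wg0.conf > /dev/null <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <SERVER_PRIVATE_KEY>

[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32
EOF

# Bring the tunnel up.
sudo wg-quick up wg0
```

Traffic from this tunnel then exits via the VPS's own address, which looks like ordinary cloud hosting to any IP-reputation check.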

Keep reading

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee with a tweet from its author celebrating the outcome.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children but the result is age verification/digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a “targeted, tailored effort,” telling the committee, “We’re often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that’s a choice, not an inevitability.”

Keep reading

Roblox Loses 12M Daily Users After Age ID Check Rollout

Roblox is paying the price for its surveillance push on users. The platform shed 12 million daily active users between Q4 2025 and Q1 2026, dropping from 144 million globally to 132 million, with the company pinning a meaningful share of the decline on its mandatory age-verification rollout.

Revenue still climbed to $1.4 billion and year-over-year DAU growth came in at 35 percent, but the sequential numbers tell the story Roblox tried to bury under positive financial framing.

The fall is steeper when measured from the peak. Roblox hit 152 million daily active users in Q3 2025, meaning roughly 20 million people have stopped showing up daily since the company began demanding facial scans and identity checks to access basic chat features. The trajectory inverted almost exactly when the age checks rolled out globally in January.

Roblox’s own language gives the game away. The company says Q1 growth was “tempered by greater-than-expected headwinds” from the age-check rollout, which “slowed new user acquisition.”

Translated out of investor-speak, fewer people want to hand over biometric data or government ID to a gaming platform than Roblox’s models predicted and existing users who haven’t verified are pulling back from a service that now treats them as second-class accounts.

The verification mechanism deserves a closer look than corporate filings tend to give it. Roblox runs facial age estimation, a system that scans users’ faces to guess how old they are, and supplements that with identity-verification documents.

Facial scanning of a user base that skews young, with a substantial portion under 13, means the company is processing biometric data from millions of children. Roblox says this is for safety. The system being constructed is a database of face scans tied to platform identities, retained on terms the company has not publicly defined.

Earlier this month, Roblox widened the restrictions to gate game access by age bracket and it has signaled more changes ahead. The company plans to “implement additional improvements designed to facilitate age-appropriate access to content and product features” over coming quarters, and has openly said its safety push will lower Roblox’s “expectations for topline growth in 2026.”

Full-year revenue guidance dropped to 20 to 25 percent growth, down from 22 to 26 percent. Bookings guidance was cut by nearly $1 billion. Wall Street responded by knocking the stock down a whopping 20 percent.

The verification numbers themselves point to a two-tier platform taking shape. Through the end of Q1, 51 percent of global daily active users had completed age checks, with US adoption running at 65 percent.

The other half of the user base is interacting with a degraded version of Roblox where communication is restricted, certain games are off-limits and the path back to full functionality runs through a face scan or an ID upload. It’s a tollgate and the toll is biometric data.

Keep reading

Meta raises specter of shutting down service to New Mexico in legal clash over child safety

Meta is raising the prospect of shutting down its social media services in New Mexico in response to a push by state prosecutors for fundamental changes to the company’s platforms, including Instagram, to protect the mental health and safety of children.

The possibility emerged amid legal gamesmanship in the runup to a bench trial next week on allegations that Meta poses a public nuisance. It’s the second phase of a case that already resulted in $375 million in civil penalties on a jury’s determination that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.

Prosecutors are asking the court to order a series of changes to child accounts on social media aimed at reining in addictive features, improving age verification and preventing child sexual exploitation through default privacy settings and closer oversight.

Meta executives have emphasized that the company continuously improves child safety and addresses compulsive social media use. The company says it’s being singled out among hundreds of apps that teens use.

In a court filing unsealed Thursday, Meta said it was infeasible for the company to meet a proposed requirement for 99% accuracy in verifying that child users are at least 13 years old, among other demands.

“As a practical matter, this requirement effectively requires Meta to shut down its services — for all users in the state — or else comply with impossible obligations,” Meta said in the filing.

Such a shutdown across a population of 2.1 million residents in New Mexico could silence personal communication on Meta’s immensely popular platforms, which also include Facebook and WhatsApp, and also impact their use for commercial advertising.

By withdrawing from New Mexico, Meta would satisfy any concerns about harm to children, but the message could appear intentionally hostile and might lead to unintended consequences, said Eric Goldman, codirector of the High Tech Law Institute at Santa Clara University School of Law in California.

Keep reading

Australia’s Under-16 Social Media Ban Fails: 73% Ignore It

Australia’s under-16 social media ban has been in force for four months and the headline finding from a new working paper out of the University of Chicago’s Becker Friedman Institute is that around three-quarters of the teenagers it targets are ignoring it.

The paper, “Why Bans Fail: Tipping Points and Australia’s Social Media Ban,” surveyed 746 Australian teenagers between March and April 2026. Among 14- and 15-year-olds covered by the ban, only about 27% are complying. The other 73% are still using Facebook, Instagram, Snapchat, TikTok, X, YouTube, Reddit, Twitch, Threads, or Kick, the ten platforms the law designates off-limits to anyone under 16.

The Online Safety Amendment (Social Media Minimum Age) Act 2024 took effect on 10 December 2025, making Australia the first country to outlaw teenage social media accounts at the federal level.

More than a dozen other countries and numerous US states are now considering versions of the same approach. The Australian model places enforcement entirely on the platforms, which face penalties of up to A$49.5 million for failing to take “reasonable steps” to keep under-16s off their services. Teenagers themselves face no legal sanction.

The teenagers know this. According to the survey, only 22% of banned teens believe they personally face any consequence for using a banned platform.

47% correctly understand that the consequences fall on the companies. Awareness of the ban is near-universal at 86%. The teens aren’t confused about what the law says. They’ve simply concluded, accurately, that the law isn’t aimed at them.

Getting around the restrictions takes minimal effort. 75% of banned teens describe circumvention as easy or very easy.

The most common workarounds are the obvious ones: lying about age on verification prompts (57%), entering false birthdates at sign-up (44%), borrowing a parent’s or older sibling’s account (42%), and routing through a VPN (30%). 64% of 14- and 15-year-olds in the survey have not had their accounts removed at all. The platforms haven’t found them. A quarter of non-compliers report that a parent, older sibling, or other adult helped them sign up for a new account after a previous one was deactivated.

The researchers also asked teenagers a more interesting question. What share of your peers would need to stop using social media before you stopped? The average answer was 69%. Some teens placed the threshold even higher. The result holds across every way the question was framed, whether the reference group was age peers, classmates, the wider school, or “a typical person your age.” The numbers came out between 62% and 69% in every variant.

Keep reading


UK Gov’t Promises More Social Media “Restrictions”

While embattled PM Sir Keir Starmer takes a pointless grilling on the even more pointless existence of Peter Mandelson, other members of his cabinet were busily paving the way for the next construction phase of our increasingly dystopian society.

Speaking to Sky News earlier today, Education Secretary Bridget Phillipson promised

“more action to keep young people safe online, including around social media”.

Which is delightfully vague.

Education Minister Olivia Bailey kept her cards similarly close to her chest, whilst trying to sound forceful:

“It is a question of how we act, not if, but to put this beyond any doubt, we are placing a clear statutory requirement that the Secretary of State ‘must’, rather than ‘may’, act […] We are clear that under any outcome, we will impose some form of age or functionality restrictions for children under 16.”

So we know they’re going to do something…we just don’t know what. And, if I had to guess, neither do Bridget or Olivia. Neither seems like the kind of person who gets kept in the loop, and that flavour of waffle is usually the reserve of those who have no idea what’s going on.

Many commenters – both for and against – have interpreted this promised action as an Australia-style social media ban for children. Certainly, that’s what Conservative MP Laura Trott seems to think in her champagne-popping tweet:

…but the signs might be pointing in another direction.

After all, the Social Media Ban is practically on the books. It was introduced as an amendment to the Children’s Wellbeing and Schools bill, and has already passed the Lords four times. It could have become law already, but Ministers and MPs have repeatedly overturned the vote, declaring the need for further consultation.

Then, earlier today and coinciding with this government pledge to take action, the Independent published a report that suggests Australia’s social media ban doesn’t work.

Two thirds of Australian teens still using social media despite under-16s ban

Keep reading

Louisiana Lawmakers Pass Bill To Send People To Jail For Smoking Marijuana Near College Campuses

Louisiana lawmakers have approved a bill that threatens to send people to jail for up to one year if they smoke marijuana within 2,000 feet of a school property—including a college campus.

The legislation from Rep. Gabe Firment (R) was passed by the House of Representatives in a 59-34 vote last week.

HB 568, which now heads to the Senate for consideration, applies to people who violate drug laws “while smoking, vaping, or otherwise abusing such controlled dangerous substance while on any property used for school purposes by any school, within two thousand feet of any such property, or while on a school bus.”

The pro-legalization Marijuana Policy Project (MPP) said the “incredibly draconian penalties” in the legislation threaten to reverse cannabis reform progress made in the state in recent years.

In 2021, then-Gov. John Bel Edwards (D) signed a bill decriminalizing marijuana by removing the threat of jail time for possessing up to 14 grams.

“HB 568 would make cannabis use a felony in huge swaths of urban and suburban areas. Two thousand feet is a little over ⅓ of a mile,” Kevin Caldwell, MPP’s Southeast legislative manager, said in an action alert to supporters. “In addition to mandatory incarceration of up to a year, the bill includes a fine of up to $1,000.”

“This is an attempt to bring back the draconian penalties that Louisiana was infamous for in decades past. This bill seeks to undo years of hard work by advocates for ending jail time for minor cannabis offenses,” he said. “Under this legislation, a student could be incarcerated for a year for consuming in a college dorm room.”

Keep reading