UK Parliament Plans ISP Blocking and Age Verification Powers

If you wanted a case study in how modern democracies widen state oversight step by step, Britain has offered a clear example. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity.

These measures advanced through procedural votes and technical amendments that sound administrative yet carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services.

But don’t get too excited. Replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content.

Keep reading

Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters took it upon itself to survey 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance ideas.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code. Each instance of non-compliance carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Europe Is Building a Digital Identity System for 450 Million People

The European Union is quietly constructing what may become one of the most sweeping digital identity systems ever attempted. Under new legislation, every EU member state must provide citizens with a government-approved “European Digital Identity Wallet” by 2026. This system will allow people to store official documents, verify identity, access government services, sign legal contracts, and potentially interact with financial institutions through a single digital platform. It is being marketed as a modernization effort designed to make life easier for citizens navigating an increasingly digital economy.

Supporters claim the digital wallet will simply replace physical paperwork. Instead of carrying passports, driver’s licenses, or other credentials, individuals will be able to verify their identity online with a government-issued digital key. The European Commission argues that this will streamline bureaucracy and allow citizens to interact with both public and private services more efficiently across all 27 member states.

Yet the implications extend far beyond administrative convenience. Once identity becomes centralized within a digital framework controlled or approved by government authorities, participation in everyday life increasingly depends on that system. Access to banking, employment verification, healthcare services, travel documentation, and legal contracts can all be integrated into the same identity infrastructure. What begins as a convenience quickly becomes a gateway through which access to modern society is managed.

Governments have always maintained population registries in one form or another. What makes digital identity systems fundamentally different is the speed and scale at which they operate. When identification becomes digitized and interconnected across borders, the ability to monitor economic and social activity expands dramatically. Identity verification can occur instantly, records can be updated in real time, and information can be shared between institutions with unprecedented efficiency.

This development becomes even more significant when viewed alongside other technological initiatives currently underway in Europe. The European Central Bank continues to explore the creation of a digital euro, a central bank digital currency that would exist entirely within electronic financial systems. If digital identity platforms and digital currency systems eventually intersect, financial activity and identity verification could become closely linked within the same infrastructure.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

FTC Says Companies Can Collect Kids’ Personal Data, As Long As It’s Called “Age Verification”

The FTC just told companies they can collect children’s personal data without parental consent, as long as it’s for “age verification.”

That’s the practical effect of a policy statement the agency issued this week. Under COPPA, websites collecting data on kids under 13 generally need verifiable parental consent first. The FTC’s new statement carves out an exception: gather whatever personal information you need to verify someone’s age, and the Commission won’t come after you for it.

The agency calls this child protection. The infrastructure it’s enabling looks different.

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said “Age verification technologies are some of the most child-protective technologies to emerge in decades,” and framed the announcement as a tool for parents.

What the statement actually does is green-light personal data collection from minors, on the theory that knowing someone’s age requires knowing who they are first.

The exemption is conditional. To avoid enforcement, sites must delete age verification data “promptly” after use, restrict third-party sharing to vendors with adequate security assurances, post clear notices about what they’re collecting, and use methods likely to produce “reasonably accurate” results. These requirements are unverifiable by the people whose data gets collected, and enforced by an agency that just announced it won’t enforce.

COPPA supposedly exists precisely because children’s personal data is sensitive and companies can’t be trusted to protect it without legal pressure.

The FTC’s new exemption uses that same sensitive data as the price of admission for age verification, then steps back from enforcement. The agency is weakening the law’s protections in order to expand the infrastructure that the law was supposedly designed to regulate.

Keep reading

Apple Rolls Out Age Verification to UK iPhone Users Under Online Safety Act

Apple is now starting to demand age verification from UK iPhone users, and the latest iOS 26.4 beta makes clear what’s at stake for anyone who declines.

The move is a direct consequence of the UK’s Online Safety Act, a censorship law that also forces platforms to check the identity or age eligibility of every adult user or face fines reaching 10% of global revenue.

The law is controversial but British Prime Minister Keir Starmer says it doesn’t go far enough.

A prompt appears after installation asking users to confirm they’re over 18. Refuse, and Apple says users “will not be able to download and purchase apps or make in-app purchases.”

The verification process gives Apple several ways to determine your age. It can pull from the payment method already linked to your account, use account age as a proxy, or ask you to scan a credit card. Some users may eventually be asked to scan a photo ID. Apple frames this as seamless.

Keep reading

Privacy Groups Revolt Against Google’s Demand to Register Every Android Developer

Android’s defining advantage over iOS has always been openness. You could build an app, distribute it yourself, and never touch Google’s systems. That era is about to end unless the open-source community can force Google to back down.

Starting September 2026, any app installed on a certified Android device must be registered by a Google-verified developer. No registration, no installation. The verification demands government-issued identification, agreement to Google’s terms and conditions, and a $25 fee.

Developers who skip Google’s approval process will find their apps blocked, even when distributed entirely outside Google Play, through stores like F-Droid, the Amazon Appstore, or Samsung’s Galaxy Store.

Organizations including the Electronic Frontier Foundation, the Free Software Foundation, F-Droid, Article 19, Fastmail, and Vivaldi signed an open letter calling on Alphabet CEO Sundar Pichai, founders Larry Page and Sergey Brin, and app ecosystem chief Vijaya Kaza to kill the policy. Their message is simple: Google is reaching into distribution channels it doesn’t own, doesn’t operate, and has no legitimate authority over.

Keep reading

Xbox UK Age Verification Launch Locks Out Thousands of Players

Xbox’s mandatory age verification rollout in the UK was a disaster almost immediately, locking thousands of players out of games, voice chat, and apps like Discord with no clear path back in.

The failures started overnight. Players reported being ejected mid-session to complete age verification checks that then took hours, stalled indefinitely, or simply refused to work regardless of what identification they submitted.

Government ID, mobile numbers, live video age estimation: for many users, the system rejected them all. Others made it through verification only to find their accounts still restricted, with no explanation and no recourse beyond contacting Xbox support.

Microsoft’s support page now carries a notice confirming it is “aware of the issue and working to fix it.” That’s the extent of the official guidance.

The verification requirement exists to comply with the UK’s new censorship law, the Online Safety Act, legislation mandating that platforms facilitating online communication verify user ages. The actual system Xbox built to deliver that compliance forcibly disconnected players from games in progress, stripped away chat functionality with anyone outside their friends list, and blocked access to third-party services.

Users who have held Xbox accounts for over 18 years found themselves flagged for verification anyway. The system doesn’t consider account age, history, or any contextual signal that might indicate an adult user. Everyone gets treated as potentially underage until they hand over documentation.

“The amount of times I’ve tried to do any method of the verification tonight is stupid,” wrote one user. “Can’t change privacy settings on my Xbox to allow me to see mods on games too. Can’t chat on Discord. Utterly broken.”

“Been trying to verify my ID for the past few hours,” added another. “It finally worked but I can’t access anything still. No Discord access at all.”

Keep reading

Germany’s SPD Pushes Mandatory Government ID Verification for Social Media

Germany’s SPD wants to end anonymous social media access.

Tim Klüssendorf, Secretary General of the Social Democratic Party, confirmed this week that his party is pushing mandatory age verification for all social media platforms, tied directly to the EU Digital Identity Wallet, the bloc’s official government ID scheme.

He’s already in talks with coalition partner CDU, Chancellor Friedrich Merz’s party, which called for an end to online anonymity just last week. Both parties now want the same thing.

Naturally, Klüssendorf framed the proposal as child protection. “We are currently not meeting the state’s obligation to protect. I believe children and young people are particularly at risk there. That has been proven,” he said after an SPD leadership meeting in Berlin.

The platforms, he added, are currently “operating a business model that is simply not compatible with our democratic principles.”

The SPD’s formal position, adopted in an internal policy paper, breaks access into three tiers by age. Under-14s would face a complete ban from social media platforms. Under-16s could access only state-approved “youth versions,” stripped of algorithmic recommendation, infinite scroll, autoplay, and engagement reward systems. For everyone 16 and older, including adults, algorithmic content recommendations would be switched off by default. Want the algorithm? You’d have to actively opt in.

The proposal sounds measured. It isn’t. Mandatory EUDI Wallet verification means linking your social media account to a government-issued digital identity before you can post, scroll, or log in.

Every platform interaction becomes traceable to a verified real-world identity. Klüssendorf acknowledged the data tension, insisting the SPD wants “a very data-minimising solution that is also in the hands of state regulation” rather than handing platforms more user data to monetize.

The EUDI Wallet architecture, at least in theory, allows age confirmation without transmitting full identity details. Whether that promise survives contact with implementation is a different question.
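
To make that “data-minimising” claim concrete, here is a minimal, purely illustrative sketch of attribute-based age attestation: a trusted issuer signs only an “over 18” claim, and a platform verifies the signature without ever seeing a name or birthdate. This is not the EUDI Wallet’s actual protocol or API; the issuer, claim format, and use of HMAC (instead of the asymmetric signatures a real credential scheme would use) are simplifying assumptions made only to keep the example self-contained.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical issuer key. In a real deployment this would be an
# asymmetric key pair held by a government-accredited issuer, and the
# platform would hold only the public key. HMAC is used here purely to
# keep the sketch dependency-free.
ISSUER_KEY = secrets.token_bytes(32)


def issue_age_attestation(over_18: bool) -> dict:
    """Issuer signs a claim containing ONLY the age attribute.

    The citizen's name, birthdate, and document number stay on the
    issuer/wallet side; they are never part of the presented claim.
    """
    claim = {"over_18": over_18, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def platform_verify(attestation: dict) -> bool:
    """Platform checks the signature and reads only the boolean claim."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["signature"]):
        return False
    return attestation["claim"]["over_18"]


# The wallet presents the attestation; the platform learns one bit of
# information (over 18 or not) and nothing else.
attestation = issue_age_attestation(over_18=True)
print(platform_verify(attestation))  # True
```

Even in this toy form, the open question is visible: the signed claim still has to be presented on every login, and whether those presentations remain unlinkable across sessions and platforms is exactly the kind of implementation detail that decides whether “data-minimising” means anything in practice.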

Keep reading

LA County Sues Roblox Over False Child Safety Claims and Lack of Age Verification

Los Angeles County filed a lawsuit against Roblox, alleging the platform has built a system that leaves children exposed to grooming because it does not go far enough in checking users’ IDs to verify their ages.

The suit names the company for public nuisance and violations of California’s false advertising law.

We obtained a copy of the complaint for you here.

The complaint is direct: “Roblox portrays its platform as a safe and appropriate place for children to play. In reality, and as Roblox well knows, the design of its platform makes children easy prey for pedophiles.”

If you weren’t aware of how big Roblox is and why this matters: Roblox serves roughly 144 million daily active users, more than Fortnite and the entire Steam userbase combined.

The platform also lets people create and play games, chat through customizable avatars, and spend real money on virtual currency.

LA County’s suit argues Roblox has consistently failed to moderate user-generated content, enforce its own age restrictions, or honestly disclose the risks predators pose to children using the service.

The platform’s moderation gaps have drawn scrutiny for years, and grooming of minors has been a documented problem. But the LA lawsuit is the latest in a pattern of governments and researchers documenting the same issue Roblox has repeatedly said it is addressing, and the latest attempt to mandate digital ID checks.

Roblox rejected the suit’s allegations. A company spokesman said the platform was built “with safety at its core” and pointed to existing protections: “We have advanced safeguards that monitor our platform for harmful content and communications, and users cannot send or receive images via chat, avoiding one of the most prevalent opportunities for misuse seen elsewhere online.”

The company added that it takes action against rule violators and cooperates with law enforcement, closing with: “There is no finish line when it comes to protecting kids and, while no system can be perfect, our commitment to safety never ends.”

The false advertising angle is the most important thing to note. LA isn’t suing Roblox over what it collects or who can see it. The county is suing because the company told parents the platform was safe for kids while allegedly knowing otherwise.

Keep reading