Britain’s Business Registry Left Director Data Wide Open — Yet the Government Is Still Building a National Digital ID

Companies House in the UK briefly turned its own corporate register into a self-service fraud toolkit. A vulnerability in the dashboard of the UK’s official business registry let anyone access other companies’ private records by pressing the back button, no hacking required.

Directors’ home addresses, email addresses, and dates of birth were all sitting there, readable and editable by anyone who knew where to look.
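The bug class described here, private records reachable just by navigating backwards, is a textbook broken-access-control flaw: the server checked who you were once, then trusted navigation state instead of re-authorizing each request. A minimal sketch of the server-side fix follows; all names (`Record`, `OWNERS`, `fetch_record`) are hypothetical illustrations, not Companies House code.

```python
from dataclasses import dataclass

@dataclass
class Record:
    company_id: str
    director_home_address: str  # the kind of field left exposed

# Toy data store and ownership map (illustrative only).
DB = {
    "C100": Record("C100", "1 High Street"),
    "C200": Record("C200", "2 Low Road"),
}
OWNERS = {"alice": {"C100"}, "bob": {"C200"}}

def fetch_record(user: str, company_id: str) -> Record:
    """Authorize on every request, never on navigation history.

    A dashboard that only checks access when a page is first rendered
    can be walked backwards into someone else's record; the ownership
    check must run again on every fetch. Sensitive responses should
    also carry "Cache-Control: no-store" so the back button replays a
    (re-authorized) request instead of a cached page.
    """
    if company_id not in OWNERS.get(user, set()):
        raise PermissionError(f"{user} may not view {company_id}")
    return DB[company_id]
```

The point of the sketch is the shape of the fix, not the details: per-request authorization on the server plus no-store caching on the client side closes both halves of the back-button hole.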

Companies House is the government body where every limited company must register to legally exist. It holds the official record of who runs Britain’s businesses, including the personal details of every director. When you incorporate a company in the UK, your information goes into this register. There is no opt-out.

The timing is what makes this even more interesting. Since November 2025, all directors in the UK have been legally required to verify their identity through GOV.UK One Login to act in their roles, feeding passport scans, biometric data, and government credentials into the same Companies House infrastructure.

Keep reading

Florida Gives Tech Platforms Deadline for Age ID Checks

Florida’s attorney general has handed tech companies an ultimatum: build identity verification systems into your platforms by April 8, or his office starts filing lawsuits.

The deadline comes as a federal appeals court hears arguments this week on whether the state can legally force millions of users to prove who they are before accessing social media.

The law driving this, HB 3, bans anyone under 14 from social media entirely and requires parental consent for 14- and 15-year-olds. It also forces adult content sites to verify visitors are 18 or older.

Attorney General James Uthmeier gave tech companies 30 days to implement age restrictions and 60 days to deploy parental consent mechanisms. “It is the law of the land,” he said at an Orlando event on March 9. Non-compliance means litigation.

What Florida is actually mandating is a digital ID checkpoint at the entrance to the internet. The law doesn’t specify which verification methods qualify as “reasonable.” It doesn’t cap how long platforms can retain identity documents. It doesn’t limit what platforms can do with the surveillance infrastructure once it’s built. Florida gets the policy win.

Users hand over their documents. The data sits in corporate systems indefinitely, available for breaches, subpoenas, and purposes nobody has disclosed yet.

Uthmeier even named TikTok and Discord specifically. Discord’s attempt to introduce digital ID age verification has been met with considerable backlash, especially after a leak of over 70,000 government IDs. Uthmeier appears unconcerned.

NetChoice, co-plaintiff in the legal challenge, named this directly: the law creates a security risk by “mandating the surrender of sensitive information.” That’s the part Florida’s child-protection framing is designed to obscure. Every minor blocked from TikTok requires millions of adults to first prove they aren’t minors. The verification burden falls on everyone.

Keep reading

Online age-verification tools spread across U.S. for child safety, but adults are being surveilled

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  
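The tiered approach described above, full document checks for high-risk services and lighter estimation elsewhere, can be sketched as a simple dispatch. The tier names and return values below are assumptions for illustration, not any vendor’s actual API.

```python
# Risk tiers the article describes as requiring full identity
# verification (illustrative grouping, not a legal taxonomy).
HIGH_RISK = {"adult_content", "gambling", "financial_services"}

def required_check(service_type: str) -> str:
    """Pick a verification method by risk tier.

    High-risk services: a government ID is scanned and matched to a
    live image of the user. Lower-risk services: an AI model estimates
    age from a selfie or video, with no identity documents retained.
    """
    if service_type in HIGH_RISK:
        return "document_scan_plus_liveness_match"
    return "ai_age_estimation"
```

The design choice worth noticing is that the split is made by the service, not the user: everyone approaching a high-risk gate gets the document check, which is exactly why the verification burden scales with the whole user base rather than with the minors being screened out.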

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 

Keep reading

UK Parliament Plans ISP Blocking and Age Verification Powers

If you wanted a case study in how modern democracies widen state oversight step by step, Britain has just provided one. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity.

These measures advanced through procedural votes and technical amendments that sounded administrative, yet carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services.

But don’t get too excited. Replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content.

Keep reading

Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters surveyed 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance rules.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code. Each instance of non-compliance carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Europe Is Building a Digital Identity System for 450 Million People

The European Union is quietly constructing what may become one of the most sweeping digital identity systems ever attempted. Under new legislation, every EU member state must provide citizens with a government-approved “European Digital Identity Wallet” by 2026. This system will allow people to store official documents, verify identity, access government services, sign legal contracts, and potentially interact with financial institutions through a single digital platform. It is being marketed as a modernization effort designed to make life easier for citizens navigating an increasingly digital economy.

Supporters claim the digital wallet will simply replace physical paperwork. Instead of carrying passports, driver’s licenses, or other credentials, individuals will be able to verify their identity online with a government-issued digital key. The European Commission argues that this will streamline bureaucracy and allow citizens to interact with both public and private services more efficiently across all 27 member states.

Yet the implications extend far beyond administrative convenience. Once identity becomes centralized within a digital framework controlled or approved by government authorities, participation in everyday life increasingly depends on that system. Access to banking, employment verification, healthcare services, travel documentation, and legal contracts can all be integrated into the same identity infrastructure. What begins as a convenience quickly becomes a gateway through which access to modern society is managed.

Governments have always maintained population registries in one form or another. What makes digital identity systems fundamentally different is the speed and scale at which they operate. When identification becomes digitized and interconnected across borders, the ability to monitor economic and social activity expands dramatically. Identity verification can occur instantly, records can be updated in real time, and information can be shared between institutions with unprecedented efficiency.

This development becomes even more significant when viewed alongside other technological initiatives currently underway in Europe. The European Central Bank continues to explore the creation of a digital euro, a central bank digital currency that would exist entirely within electronic financial systems. If digital identity platforms and digital currency systems eventually intersect, financial activity and identity verification could become closely linked within the same infrastructure.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

FTC Says Companies Can Collect Kids’ Personal Data, As Long As It’s Called “Age Verification”

The FTC just told companies they can collect children’s personal data without parental consent, as long as it’s for “age verification.”

That’s the practical effect of a policy statement the agency issued this week. Under COPPA, websites collecting data on kids under 13 generally need verifiable parental consent first. The FTC’s new statement carves out an exception: gather whatever personal information you need to verify someone’s age, and the Commission won’t come after you for it.

The agency calls this child protection. The infrastructure it’s enabling looks different.

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said “Age verification technologies are some of the most child-protective technologies to emerge in decades,” and framed the announcement as a tool for parents.

What the statement actually does is green-light personal data collection from minors, on the theory that knowing someone’s age requires knowing who they are first.

The exemption is conditional. To avoid enforcement, sites must delete age verification data “promptly” after use, restrict third-party sharing to vendors with adequate security assurances, post clear notices about what they’re collecting, and use methods likely to produce “reasonably accurate” results. These requirements are unverifiable by the people whose data gets collected, and enforced by an agency that just announced it won’t enforce.

COPPA supposedly exists precisely because children’s personal data is sensitive and companies can’t be trusted to protect it without legal pressure.

The FTC’s new exemption uses that same sensitive data as the price of admission for age verification, then steps back from enforcement. The agency is weakening the law’s protections in order to expand the infrastructure that the law was supposedly designed to regulate.

Keep reading

Apple Rolls Out Age Verification to UK iPhone Users Under Online Safety Act

Apple is now starting to demand age verification from UK iPhone users, and the latest iOS 26.4 beta makes clear what’s at stake for anyone who declines.

The move is a direct consequence of the UK’s Online Safety Act, a censorship law that also forces platforms to verify the age or identity of every adult user or face fines reaching 10% of global revenue.

The law is controversial but British Prime Minister Keir Starmer says it doesn’t go far enough.

A prompt appears after installing the update, asking users to confirm they’re over 18. Refuse, and Apple says users “will not be able to download and purchase apps or make in-app purchases.”

The verification process gives Apple several ways to establish your age. It can pull from the payment method already linked to your account, use account age as a proxy, or ask you to scan a credit card. Some users may eventually be asked to scan a photo ID. Apple frames this as seamless.

Keep reading

Privacy Groups Revolt Against Google’s Demand to Register Every Android Developer

Android’s defining advantage over iOS has always been openness. You could build an app, distribute it yourself, and never touch Google’s systems. That era is about to end unless the open-source community can force Google to back down.

Starting September 2026, any app installed on a certified Android device must be registered by a Google-verified developer. No registration, no installation. The verification demands government-issued identification, agreement to Google’s terms and conditions, and a $25 fee.

Developers who skip Google’s approval process will find their apps blocked, even when distributed entirely outside Google Play, through stores like F-Droid, the Amazon Appstore, or Samsung’s Galaxy Store.

Organizations, including the Electronic Frontier Foundation, the Free Software Foundation, F-Droid, Article 19, Fastmail, and Vivaldi, signed an open letter calling on Alphabet CEO Sundar Pichai, founders Larry Page and Sergey Brin, and app ecosystem chief Vijaya Kaza to kill the policy. Their message is simple: Google is reaching into distribution channels it doesn’t own, doesn’t operate, and has no legitimate authority over.

Keep reading