The App Store Accountability Act Is a Privacy Nightmare Disguised as Child Protection

Washington has discovered a familiar political trick: wrap a flawed policy in the language of protecting children and hope nobody reads the fine print. The latest example is the App Store Accountability Act, a bill championed by lawmakers who appear eager to regulate the internet without understanding how it actually works.

Supporters insist the legislation will protect kids online. In reality, it risks undermining privacy, violating constitutional protections, and creating a cybersecurity disaster in the process.

And remarkably, Congress is pushing ahead even though federal courts have already signaled that this exact regulatory model is unconstitutional.

The App Store Accountability Act would require app stores to verify the ages of every user and share age information with app developers. On paper, that sounds straightforward. In practice, it would force companies to collect massive amounts of sensitive personal data simply to download everyday apps.

Want to download a weather app? Verify your age.

Want to install a calculator? Verify your age.

Want to read the news? Verify your age.

The practical result is obvious: app stores would be compelled to gather highly sensitive identity data on tens of millions of Americans and then distribute that information to countless third-party developers.

This could be one of the largest digital identity honeypots ever conceived.

Security experts have been warning about this for months. In fact, 419 cybersecurity and privacy academics from 30 countries recently signed an open letter warning that large-scale age verification systems are “dangerous and socially unacceptable” because they create enormous new attack surfaces for hackers and data thieves.

The logic is simple. If every app download requires age verification, that means sensitive identity data must be stored, transmitted, and accessed across thousands of services. Instead of limiting the spread of personal information, the bill effectively multiplies it.

For cybercriminals, it would be a dream target.


UK Parliament Plans ISP Blocking and Age Verification Powers

If you want a case study in how modern democracies widen state oversight step by step, Britain just offered a clear one. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity.

These measures advanced through procedural votes and technical amendments that sound administrative, yet they carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services.

But don’t get too excited. Replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content.


Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.
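To make the distinction concrete, here is a minimal Python sketch of the gap between a masking filter and a rewriting filter. Everything in it is hypothetical: the flagged-term list, the rewrite logic, and the function names are illustrations only, and Roblox’s actual system presumably runs messages through a trained language model rather than a lookup table.

```python
import re

# Hypothetical flagged terms; Roblox's real lists and models are not public.
FLAGGED = {"tf", "wtf"}
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, FLAGGED)) + r")\b", re.IGNORECASE)

def mask_filter(message: str) -> str:
    """Old-style filtering: replace flagged words with '#' characters."""
    return PATTERN.sub(lambda m: "#" * len(m.group(0)), message)

def rewrite_filter(message: str) -> tuple[str, bool]:
    """New-style filtering: silently reword the message instead of masking it.
    A real system would call a language model here; this substitution is a stand-in."""
    rewritten = PATTERN.sub("", message)
    rewritten = re.sub(r"\s+", " ", rewritten).strip()  # tidy leftover whitespace
    return rewritten, rewritten != message

if __name__ == "__main__":
    original = "Hurry TF up!"
    print(mask_filter(original))            # -> Hurry ## up!
    rewritten, changed = rewrite_filter(original)
    print(rewritten, changed)               # -> Hurry up! True
```

The difference the sketch surfaces is the one that matters: the masking path leaves visible evidence that something was removed, while the rewriting path delivers a fluent sentence the sender never typed.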

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.


TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging. Lock it down. Facebook turned on end-to-end encryption. Instagram and Messenger did the same. X joined the club. Yes, metadata is still an issue and the protocols used matter; but, generally speaking, the move was toward more privacy of actual messages.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning.

The explanation was safety.

TikTok belongs to ByteDance, a Chinese technology company that operates under Beijing’s jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system.

Alan Woodward, a cybersecurity professor at Surrey University, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.”

TikTok declined to engage with that suggestion, of course. The remark hung in the air. It’s worth adding, however, that TikTok’s US operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation can read the messages. The platform running the service cannot access the content. Governments cannot request it. Engineers inside the company cannot view it.
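For readers who want to see the mechanics, here is a minimal sketch of that property using the PyNaCl library’s public-key “box” construction. It illustrates the general idea only; the protocols deployed messengers actually use, such as Signal’s, add key agreement, forward secrecy, and stronger authentication on top.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each participant generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_private, bob_private.public_key).encrypt(b"meet at 6")

# The platform relaying the message only ever sees `ciphertext`. Without one of
# the private keys it cannot recover the plaintext, and neither can its
# engineers or anyone serving it a legal request for message content.

# Bob decrypts on his own device with his private key and Alice's public key.
assert Box(bob_private, alice_private.public_key).decrypt(ciphertext) == b"meet at 6"
```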

TikTok’s system operates in a different way. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels.

TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact: “we can read your messages to catch predators” and “we can read your messages” describe the same system.


Congress Is Considering Abolishing Your Right to Be Anonymous Online

In August 2024, the Biden administration hosted hundreds of influencers at the White House for the first-ever Creator Economy Conference. Neera Tanden, a senior Biden adviser, took to the stage and bemoaned anonymity online. The influencers alongside her agreed, pushing the idea that anonymous speech on the internet is harmful, and regulation is needed to force the use of real names on social media. The audience whispered excitedly as those on stage spoke about how proposed laws like the Kids Online Safety Act, or KOSA, could unmask every troll. 

This narrative of online safety, particularly in relation to children, has become central to the bipartisan effort to censor and deanonymize the internet for everyone. Today, a package of a dozen “child online safety” bills is moving forward in the House of Representatives. The bills, framed as a way to crack down on harmful content and make the internet safer, would force social media companies to implement invasive identity verification measures in order to keep children from accessing online spaces.

The problem is that there’s no way to reliably verify someone’s age without verifying who they are. A platform cannot magically discern that a user is 16 without collecting identifying information, whether through government documents such as a passport, payment information like a credit card, or other identity-disclosing data. Whether that data is stored by the platform itself or outsourced to a vendor, the result is always the same: A user’s offline identity is forever linked with their online behavior.

Stripping anonymity from the internet would constitute one of the most sweeping rollbacks of civil rights in recent history. It would allow for unprecedented levels of mass surveillance and censorship, endangering the most marginalized members of society. Whistleblowers exposing corporate wrongdoing could be tracked and fired, government employees speaking out about illegal behavior or bad policies could face prosecution, and activists organizing protests could be identified and surveilled before ever setting foot on the street.


UK Government Secretly Tracked 25 Million People as Potential EV Owners

The UK government spent two years tracking 25 million mobile devices to build a picture of who drives electric cars. Not suspects or criminals. Just ordinary people whose browsing history mentioned EVs often enough to flag them as worth following.

The Department for Transport paid telecoms company O2 £600,000 ($809,000) to run the operation. According to the Telegraph, O2 trawled through its customers’ web browsing histories and app records, flagging anyone who visited an EV-related site at least once a month across two or more months.
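Going by that description, the flagging rule amounts to counting the distinct months in which a device touched an EV-related site and keeping anyone who clears two. A rough sketch of that logic is below; the field names and record layout are guesses for illustration, since the pipeline O2 actually ran has not been published.

```python
from collections import defaultdict
from datetime import date

def flag_potential_ev_owners(records, min_months: int = 2) -> set[str]:
    """Flag devices that visited an EV-related site in at least `min_months` distinct months."""
    months_per_device: dict[str, set[tuple[int, int]]] = defaultdict(set)
    for device_id, visit_date, ev_related in records:
        if ev_related:
            months_per_device[device_id].add((visit_date.year, visit_date.month))
    return {dev for dev, months in months_per_device.items() if len(months) >= min_months}

# Hypothetical browsing records: (device_id, visit_date, visited_ev_related_site).
visits = [
    ("device-1", date(2023, 3, 4), True),
    ("device-1", date(2023, 4, 19), True),
    ("device-2", date(2023, 3, 7), True),    # one qualifying month only -> not flagged
    ("device-2", date(2023, 3, 21), False),
]

print(flag_potential_ev_owners(visits))      # -> {'device-1'}
```

A threshold that coarse sweeps in anyone who merely reads about EVs, which is consistent with the DfT’s later admission that the data told it little about actual charging behaviour.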

That pool extended beyond O2’s own customers to include people on Tesco Mobile, GiffGaff, and Virgin Mobile, networks that run on O2’s infrastructure and whose users had no idea their data was being packaged and sold to a government agency.

Once you were flagged as a “potential EV owner,” your physical movements were traced across the country. London, the North-West, and the East of England received particular attention.

The techniques are standard in serious organized crime investigations. The DfT applied them to people buying environmentally friendly cars.

Andy Palmer, former executive at Nissan and Aston Martin, put it plainly: “I’m told it’s anonymized and aggregated, and that may well satisfy legal thresholds. But legality and legitimacy are not the same thing.” He added: “If you erode public trust in how that data is gathered, you undermine the very transition you are trying to accelerate.”

In practice, the promise of “anonymized” data means very little: movement patterns this detailed can often be tied back to specific individuals with only a handful of reference points.

The surveillance ran for two years before the DfT quietly admitted defeat in April 2024, conceding that “mobile data cannot directly be used to provide information around charging behaviour or travel time.”

The program ended not because anyone questioned whether mass tracking of innocent people was appropriate, but because the data turned out to be useless for its stated purpose.

Civil servants from the DfT and Treasury were simultaneously exploring new EV taxes to replace fuel duty revenue. The people being surveilled were doing exactly what government policy encouraged them to do.

Conservative MP Sir David Davis drew the obvious conclusion: “It’s an object lesson in why you can’t trust the state with unfettered access to people’s information, because they’ve obviously taken this information without people’s permission with the objective of disadvantaging them, either by tax or other policy matters. If they’ll do it on this, with people who are doing what the government wants in policy terms, namely, pursuing green policies, what on Earth will they do elsewhere?”


California Law Forces Age-Tracking Into Every Operating System by 2027

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API.

Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.
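In practice, “broadcast that data to app developers through a real-time API” means the operating system keeps an age attribute for every account and answers queries about it from any installed app. Here is a minimal sketch of that architecture; the class names, payload shape, and exact bracket boundaries are illustrative rather than lifted from the bill text.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeSignal:
    bracket: str  # e.g. "under_13", "13_15", "16_17", "adult"

class OperatingSystemAccount:
    def __init__(self, birth_date: date):
        # The privacy problem starts here: the OS must collect and retain
        # age information for every account in order to answer at all.
        self._birth_date = birth_date

    def age_signal(self, today: date) -> AgeSignal:
        bd = self._birth_date
        age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
        if age < 13:
            bracket = "under_13"
        elif age < 16:
            bracket = "13_15"
        elif age < 18:
            bracket = "16_17"
        else:
            bracket = "adult"
        return AgeSignal(bracket=bracket)

# Any installed app -- a weather app, a calculator, a news reader -- can ask:
account = OperatingSystemAccount(birth_date=date(2011, 6, 1))
print(account.age_signal(today=date(2027, 1, 1)).bracket)  # -> 13_15
```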

The proposals are particularly dumb for open-source Linux operating systems. Linux exists in large part because some people want computing that doesn’t surveil them. That’s not incidental to the platform; it’s foundational.

Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody.


Mexico Mandates Biometric SIM Registration for All Phone Numbers

Anonymous prepaid SIM cards are dying in Mexico. By July 1, 2026, every active cell phone number in the country must be biometrically linked to a named, government-credentialed individual or face suspension. That’s around 127 million numbers, each one tethered to an identity the Mexican government can look up by name.

The mobile registration law took effect January 9, 2026, covering prepaid and postpaid plans, physical SIMs, and eSIMs alike. Existing subscribers have until June 30 to complete registration. New lines activated after January 9 get 30 days. Miss the window, and the line goes dark.

The enforcement mechanism runs through the CURP Biométrica, Mexico’s biometric upgrade to its existing population registry code. The new credential embeds a photograph, electronic signature, and QR code that ties directly to biometrically verified records held in the national registry.

Residents registering a mobile line must provide their CURP number alongside a valid government ID, which makes biometric enrollment not optional but structurally required. You cannot register a phone number without first handing your biometric data to the state.

What Mexico is building here is a national phone network where every number has a face attached to it.


FTC Says Companies Can Collect Kids’ Personal Data, As Long As It’s Called “Age Verification”

The FTC just told companies they can collect children’s personal data without parental consent, as long as it’s for “age verification.”

That’s the practical effect of a policy statement the agency issued this week. Under COPPA, websites collecting data on kids under 13 generally need verifiable parental consent first. The FTC’s new statement carves out an exception: gather whatever personal information you need to verify someone’s age, and the Commission won’t come after you for it.

The agency calls this child protection. The infrastructure it’s enabling looks different.

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said “Age verification technologies are some of the most child-protective technologies to emerge in decades,” and framed the announcement as a tool for parents.

What the statement actually does is green-light personal data collection from minors, on the theory that knowing someone’s age requires knowing who they are first.

The exemption is conditional. To avoid enforcement, sites must delete age verification data “promptly” after use, restrict third-party sharing to vendors with adequate security assurances, post clear notices about what they’re collecting, and use methods likely to produce “reasonably accurate” results. These requirements are unverifiable by the people whose data gets collected, and enforced by an agency that just announced it won’t enforce.

COPPA supposedly exists precisely because children’s personal data is sensitive and companies can’t be trusted to protect it without legal pressure.

The FTC’s new exemption uses that same sensitive data as the price of admission for age verification, then steps back from enforcement. The agency is weakening the law’s protections in order to expand the infrastructure that the law was supposedly designed to regulate.


Privacy Groups Revolt Against Google’s Demand to Register Every Android Developer

Android’s defining advantage over iOS has always been openness. You could build an app, distribute it yourself, and never touch Google’s systems. That era is about to end unless the open-source community can force Google to back down.

Starting September 2026, any app installed on a certified Android device must be registered by a Google-verified developer. No registration, no installation. The verification demands government-issued identification, agreement to Google’s terms and conditions, and a $25 fee.

Developers who skip Google’s approval process will find their apps blocked, even when distributed entirely outside Google Play, through stores like F-Droid, the Amazon Appstore, or Samsung’s Galaxy Store.

Organizations including the Electronic Frontier Foundation, the Free Software Foundation, F-Droid, Article 19, Fastmail, and Vivaldi signed an open letter calling on Alphabet CEO Sundar Pichai, founders Larry Page and Sergey Brin, and app ecosystem chief Vijaya Kaza to kill the policy. Their message is simple: Google is reaching into distribution channels it doesn’t own, doesn’t operate, and has no legitimate authority over.
