The UK’s Plan to Put an Age Verification Chaperone in Every Pocket

UK officials are preparing to urge Apple and Google to redesign their operating systems so that every phone and computer sold in the country can automatically block nude imagery unless the user has proved they are an adult.

The proposal, part of the Home Office’s upcoming plan billed as combating violence against women and girls, would rely on technology built directly into devices: software capable of scanning images locally to detect explicit material.

Under the plan, as reported by the Financial Times, such scanning would be turned on by default. Anyone wanting to take, send, or open an explicit photo would first have to verify their age using a government-issued ID or a biometric check.

The goal, officials say, is to prevent children from being exposed to sexual material or drawn into exploitative exchanges online.

People briefed on the discussions said the Home Office had explored the possibility of making these tools a legal requirement but decided, for now, to rely on encouragement rather than legislation.

Even so, the expectation is that large manufacturers will come under intense pressure to comply.

The government’s approach reflects growing anxiety about how easily minors can access sexual content and how grooming can occur through everyday apps.

Instead of copying Australia’s decision to ban social media use for under-16s, British ministers have chosen to focus on controlling imagery itself.

Safeguarding minister Jess Phillips has praised technology firms that already filter content at the device level. She cited HMD Global, maker of Nokia phones, for embedding child-protection software called HarmBlock, created by UK-based SafeToNet, which automatically blocks explicit images from being viewed or shared.

Apple and Google have built smaller-scale systems of their own. Apple’s “Communication Safety” function scans photos in apps like Messages, AirDrop, and FaceTime and warns children when nudity is detected, but teens can ignore the alert.

Google’s Family Link and “sensitive content warnings” work similarly on Android, though they stop short of scanning across all apps. Both companies allow parents to apply restrictions, but neither has a universal filter that covers the entire operating system.

The Home Office wants to go further, calling for a system that would block any nude image unless an adult identity check has been passed.
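
Mechanically, what is being described is an operating-system-level gate: a local classifier scores every image the device captures, receives, or renders, and the result is checked against the user’s verification status before the image is handled at all. Here is a minimal sketch of that flow; every name, type, and threshold is invented for illustration, since no such OS hook currently exists:

```python
from dataclasses import dataclass

# Assumed confidence cutoff for the hypothetical on-device classifier.
NUDITY_THRESHOLD = 0.85

@dataclass
class DeviceState:
    # Flipped to True only after a government-ID or biometric age check.
    adult_verified: bool = False

def nudity_score(image_bytes: bytes) -> float:
    """Placeholder for a local ML model; a real system would run a small
    classifier over the image entirely on-device."""
    return 0.0  # stub so the sketch runs

def may_handle_image(image_bytes: bytes, state: DeviceState) -> bool:
    """Gate applied before an image is taken, sent, or opened.
    Scanning is on by default; only a verified adult bypasses the block."""
    if state.adult_verified:
        return True
    return nudity_score(image_bytes) < NUDITY_THRESHOLD
```

The significant design property is where the check sits: below every app, which is exactly what distinguishes it from the service-scoped tools Apple and Google already ship.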

Keep reading

UK Lawmakers Propose Mandatory On-Device Surveillance and VPN Age Verification

Lawmakers in the United Kingdom are proposing amendments to the Children’s Wellbeing and Schools Bill that would require nearly all smartphones and tablets to include built-in, unremovable surveillance software.

The proposal appears under a section titled “Action to promote the well-being of children by combating child sexual abuse material (CSAM).”

We obtained a copy of the proposed amendments for you here.

The amendment text specifies that any “relevant device supplied for use in the UK must have installed tamper-proof system software which is highly effective at preventing the recording, transmitting (by any means, including livestreaming) and viewing of CSAM using that device.”

It further defines “relevant devices” as “smartphones or tablet computers which are either internet-connectable products or network-connectable products for the purposes of section 5 of the Product Security and Telecommunications Infrastructure Act 2022.”

Under this clause, manufacturers, importers, and distributors would be legally required to ensure that every internet-connected phone or tablet they sell in the UK meets this “CSAM requirement.”

Enforcement would occur “as if the CSAM requirement was a security requirement for the purposes of Part 1 of the Product Security and Telecommunications Infrastructure Act 2022.”

In practical terms, the only way for such software to “prevent the recording, transmitting (by any means, including livestreaming) and viewing of CSAM” would be for devices to continuously scan and analyze all photos, videos, and livestreams handled by the device.

That process would have to take place directly on users’ phones and tablets, examining both personal and encrypted material to determine whether any of it might be considered illegal content. Although the measure is presented as a child-safety protection, its operation would create a system of constant client-side scanning.

This means the software would inspect private communications, media, and files on personal devices without the user’s consent.

Such a mechanism would undermine end-to-end encryption and normalize pre-emptive surveillance built directly into consumer hardware.
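
To make that concrete: the scan has to run on the plaintext, before the messaging app encrypts anything, so the endpoint itself becomes the inspection point. A minimal sketch under that assumption; exact SHA-256 hashing stands in for the perceptual hashing (PhotoDNA-style) that real deployments use, and every name here is invented:

```python
import hashlib

# Hypothetical on-device blocklist of known-illegal image hashes. Real
# systems use perceptual hashes that survive resizing and re-encoding;
# exact SHA-256 is used here only to keep the sketch self-contained.
BLOCKED_HASHES: set[str] = set()

def flags_media(media: bytes) -> bool:
    """The check necessarily runs on raw, unencrypted content."""
    return hashlib.sha256(media).hexdigest() in BLOCKED_HASHES

def send_message(media: bytes, encrypt) -> bytes | None:
    """Client-side scanning sits between the user and the encryption step:
    content is inspected before it is sealed, regardless of consent."""
    if flags_media(media):
        return None  # blocked, and under most proposals also reported
    return encrypt(media)
```

Nothing in this flow depends on the user agreeing to it; the hook runs unconditionally on every piece of media, which is precisely the property critics describe as pre-emptive surveillance.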

The latest figures from German law enforcement offer a clear warning about the risks of expanding this type of surveillance: in 2024, nearly half of all CSAM scanning tips received by Germany were errors.

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three of them, the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333), would require it outright.

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

Australian Leaders and Legacy Media Celebrate Launch of Online Digital ID Age Verification Law

It was sold as a “historic day,” the kind politicians like to frame with national pride and moral purpose.

Cameras flashed in Canberra as Australia’s Prime Minister Anthony Albanese stood at the podium, declaring victory in the fight to “protect children.”

What Australians actually got was a nationwide digital ID system. Starting December 10, every citizen logging into select online platforms must now pass through digital ID verification, biometric scans, face matching, and document checks, all justified as a way to keep under-16s off social media.

Kids are now banned from certain platforms, but it’s the adults who must hand over their faces, IDs, and biometric data to prove they’re not kids.

“Protecting children” has been converted into a universal surveillance upgrade for everyone.

According to Albanese, who once said that if he became a dictator, the first thing he would do was ban social media, the Online Safety Amendment (Social Media Minimum Age) Bill 2024 will “change lives.”

He described it as a “profound reform” that will “reverberate around the world,” giving parents “peace of mind” and inspiring “the global community” to copy Australia’s example.

The Prime Minister’s pride, he said, had “never been greater.” Listening to him, you’d think he’d cured cancer rather than made face scans mandatory for logging in to Facebook.

Keep reading

Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason’s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedure” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading

YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube delivered a “disappointing update” to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking users aged under 16 out of their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government, which initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was getting legal advice about how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability protections.

Keep reading

Congress Pushes for Nationwide Internet Age Verification Plan

Republican lawmakers have proposed a new way to make tech companies comply with age verification laws, despite resistance from websites like Pornhub. The App Store Accountability Act (ASA), introduced by Senator Mike Lee (R-UT) and Representative John James (R-MI), takes a different approach: requiring app stores themselves to verify users’ ages and pass that information to apps when they are downloaded.

The bill is part of a broader push in Congress to tighten safeguards for minors online and has earned support from major tech companies, including Facebook parent company Meta, Pinterest, and Snap. Pinterest CEO Bill Ready argues that one standard would simplify the process and reduce the confusion created by a patchwork of state requirements. “The need for a federal standard is urgent,” he said.

“I think most people at most of these companies probably do want to protect kids,” Sen. Lee said, adding that support from tech companies like Pinterest “makes a big difference.”

However, the proposal faces resistance from civil liberties groups and digital rights advocates. Critics warn that compulsory age verification could limit access to lawful online content, raising First Amendment concerns. They also cite significant privacy risks, arguing that systems requiring users to submit sensitive personal information could expose them to data breaches or misuse.

Some major websites have rejected attempts to enforce online age verification. Pornhub has withdrawn its services from states that require government-issued ID or similar credentials for access to adult material. The company argued that these laws push users toward unregulated platforms while forcing supposedly legitimate sites to collect data they would prefer not to hold.

In 2025, the Supreme Court upheld Texas’s age-verification law for explicit content, with the majority concluding that states may require age checks to prevent minors from viewing harmful material.

Supporters of federal action contend that the ASA would avoid the growing compliance difficulties posed by differing state regulations. Sen. Lee has stated, “I don’t believe that there’s anything unlawful, unconstitutional, or otherwise problematic about this legislation,” arguing that an app-store-centered approach would reduce repeated verification across multiple platforms.

Keep reading