Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason's previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedures” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

US Under Secretary Warns Britain That the First Amendment Isn’t Negotiable

This week, Sarah Rogers, the US Under Secretary of State for Public Diplomacy, touched down in the UK not to sip tea or admire the Crown Jewels, but to deliver a message as subtle as a boot in the face: stop trying to censor Americans in America.

Yes, really.

According to Rogers, the UK’s speech regulator, Ofcom, the bureaucratic enforcer behind Britain’s censorship law, the Online Safety Act (OSA), has been getting ideas. Dangerous ones. Like attempting to extend its censorship regime outside the United Kingdom and onto American soil. You know, that country across the ocean where the First Amendment exists and people can still say controversial things without a court summons landing on their doormat.

To GB News, Rogers called this attempt at international thought-policing “a deal-breaker,” “a non-starter,” and “a red line.”

In State Department speak, that is basically the equivalent of someone slamming the brakes, looking Britain in the eye, and saying, “You try that again, and there will be consequences.”

To understand how Britain got itself into this mess, you have to understand the Online Safety Act. It is a law that reads like it was drafted by a committee of alarmed Victorian schoolteachers who just discovered the internet.

The OSA is supposedly designed to “protect children online,” which sounds noble until you realize it means criminalizing large swaths of adult speech, forcing platforms to delete legal content, and requiring identity and age checks that would make a KGB officer blush.

It even threatens prosecution over “psychological harm.” And now, apparently, it wants to enforce all of that in other countries too.

Rogers was not impressed, saying Ofcom has tried to impose the OSA extraterritorially and attempted to censor Americans in America. That, she made clear, is outrageous.

It’s more than a diplomatic spat. Rogers made it painfully clear the US isn’t going to just write a sternly worded letter and move on. There is legislative retaliation on the table.

The GRANITE Act, Guaranteeing Rights Against Novel International Tyranny & Extortion, is more than a clever acronym. It is the legislative middle finger Washington can consider if the UK keeps pretending it can veto American free speech from 3,500 miles away.

The bill, already circulating in the Wyoming state legislature, would strip foreign governments of their usual protections from lawsuits in the US if they try to censor American citizens or companies.

In other words, if Ofcom wants to slap US platforms with foreign censorship rules, it had better be ready to defend itself in an American courtroom, where “freedom of expression” isn’t a slogan; it is a constitutional right.

Rogers confirmed that Congress will likely consider that bill, and will certainly consider other options, if the British government doesn’t back down.

Of course, the GRANITE Act didn’t come out of nowhere. Rogers’s warning didn’t either. It is a response to the increasingly unhinged state of free speech in the UK, where adults can be arrested for memes, priests investigated for praying silently, and grandmothers interrogated for criticizing gender ideology.

“When you don’t rigorously defend that right, even when it’s inconvenient, even when the speech is offensive,” Rogers said, “you end up in these absurd scenarios where you have comedians arrested for tweets.”

This is the modern UK, where “hate speech” has been stretched to include everything from telling jokes to sharing news stories about immigration. And now, under the OSA, that censorious spirit has gone global.

Keep reading

Goodbye Jury Trials, Hello Digital ID: 10 “recommendations” from the Crime and Justice Commission

The Times Crime and Justice Commission was established last year with a mission to…

consider the future of policing and the criminal justice system, in the light of the knife crime crisis, a shoplifting epidemic, the growing threat of cybercrime, concerns about the culture of the police, court backlogs, problems with legal aid and overflowing prisons.

And today is that long-promised glorious golden day where they reveal their findings. The white smoke has gone up and we get to witness the result of their long hours of toil.

How are we going to fix everything?

Let’s take a look at the complete list, with some helpful annotations:

1. Introduce a universal digital ID system to drive down fraud, tackle illegal immigration and reduce identity theft;

Digital ID for everybody! It’s going to solve every problem! We’ve talked this to death, it was always going to be in here.

2. Target persistent offenders and crime hotspots using data to clamp down on shoplifting, robbery and antisocial behaviour;

That’s about surveillance. “Data” means your private data which they will get from social media companies.

3. Roll out live facial recognition and other artificial intelligence tools to drive the efficiency and effectiveness of the police;

Again, FRT was always going to feature. I’m not sure what “other artificial intelligence tools” means, but the vagueness is likely the point. “Efficiency” is the word doing the heavy lifting in that sentence, intended to capture the pro-MAGA, pro-Musk UK crowd.

4. Create a licence to practise for the police, with revalidation every five years to improve culture and enhance professionalism;

That’s just throwing something out for the “other side”. So far it’s all just more powers for the police and courts, this adds some faux accountability framework into the mix to make it look fair.

5. Set up victim care hubs backed by a unified digital case file to create a seamless source of information and advice;

Same as above, with some extra seasoning for the digital identity sales pitch thrown in.

6. Introduce a new intermediate court with a judge and two magistrates to speed up justice and reduce court delays;

This is about replacing trial by jury, and that’s all it’s about. It’s something they’ve been wanting to do for years, and they keep finding new excuses to try it.

7. Move to a “common sense” approach to sentencing with greater transparency about jail time, incentives for rehabilitation and expanded use of house arrest;

Not sure what this means in real terms, but any use of “common sense” in this kind of document should always raise an eyebrow. As should the idea of “expanded use of house arrest”.

8. Give more autonomy and accountability to prison governors with a greater focus on rehabilitation and create a College of Prison and Probation Officers;

No idea what this means yet. Could be about more prison-based work programs (a la private prisons in the US), could just be fluff between important parts.

9. Restrict social media for under-16s to protect children from criminals and extreme violent or sexual content;

Again, very predictable. And, again, very dishonest. As we’ve said a thousand times, “restricting social media for under-16s” – in practical terms – means everyone on social media has to verify their age. So bye-bye online anonymity.

Keep reading

Australia: Meta begins deactivating accounts ahead of 16-year-old minimum social media age limit

Meta has begun removing social media accounts belonging to Australian children under 16 years old from its platforms, Instagram, Facebook and Threads.

The tech giant has started notifying users aged 13 to 15 years old that their accounts will cease to exist on December 4th. Starting December 10th, social media companies will face fines of up to A$49.5 million ($33 million USD) should they fail to take steps to stop children under 16 years old from owning accounts.

Australia’s eSafety Commissioner will send major platforms notices on December 11th demanding statistics about exactly how many accounts were removed from their sites. Additionally, monthly notices are planned for 2026.

It is estimated that 150,000 Facebook accounts and 325,000 Instagram accounts will be terminated.

“The government recognizes that age assurance may require several days or weeks to complete fairly and accurately,” Communications Minister Anika Wells said.

“However, if eSafety identifies systemic breaches of the law, the platforms will face fines,” she added.

Google sent out a notice on Wednesday stating that anyone in Australia under 16 would be signed out of YouTube on December 10th and will lose features available only to account holders, such as playlists.

Google states it determines YouTube users’ ages “based on personal data contained in associated Google accounts and other signals.”

“We have consistently said this rushed legislation misunderstands our platform, the way young Australians use it and, most importantly, it does not fulfill its promise to make kids safer online,” a Google statement said.

Users over 16 years old whose account access was wrongly revoked have the option to verify their age through government-issued ID or a video selfie, per Meta.

Platforms such as X and Reddit contacted underage users, suggesting that they download their posted pictures and freeze their accounts until they come of age.

The Australian government claims the ban will protect children from the harms of social media. However, critics say this decision may isolate certain groups who depend on the platforms for connection and push children to other, potentially more harmful corners of the internet.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading

Macron Wants To Go Full “Ministry Of Truth” With Draconian Censorship Grab

French President Emmanuel Macron is facing fierce pushback from conservative voices within France over his renewed drive to grant the state sweeping new censorship powers, Barron’s reports.

On Friday, Macron once again raised the alarm about so-called “disinformation” spreading on social media, insisting that parliament grant authorities the ability to immediately block content deemed “false information.” As if the existing arsenal of censorship tools weren’t enough, the left-wing president now wants to establish a “professional certification” system that would effectively create an official, state-approved class of media outlets—separating those that toe the government’s ethical line from those that refuse to do so.

France’s right-wing press has reacted with outrage, with Vincent Bolloré’s Journal du Dimanche denouncing Macron’s “totalitarian drift” on free speech and warning of “the temptation of a ministry of truth.”

Bolloré-owned CNews and Europe 1 were equally scathing, with popular presenter Pascal Praud accusing the president of acting out of personal resentment, declaring the initiative comes from a “president unhappy with his treatment by the media and who wants to impose a single narrative.”

National Rally leader Jordan Bardella also delivered a blistering rebuke, saying in a statement, “Tampering with freedom of expression is an authoritarian temptation, which corresponds to the solitude of a man… who has lost power and seeks to maintain it by controlling information.”

Bruno Retailleau, head of the Republicans in the Senate, echoed the warning on X: “[N]o government has the right to filter the media or dictate the truth.”

Keep reading

EU Push to Make Message Scanning Permanent Despite Evidence of Failure and Privacy Risks

The European Union has a habit of turning its worst temporary ideas into permanent fixtures. This time it is “Chat Control 1.0,” the 2021 law that lets tech companies scan everyone’s private messages in the name of child protection.

It was supposed to be a stopgap measure, a temporary derogation of privacy rights until proper evidence came in.

Now, if you’ve been following our previous reporting, you’ll know the Council wants to make it permanent, even though the Commission’s own 2025 evaluation report admits it has no evidence the thing actually works.

We obtained a copy of the report for you here.

The report doesn’t even hide the chaos. It confesses to missing data, unproven results, and error rates that would embarrass a basic software experiment.

Yet its conclusion jumps from “available data are insufficient” to “there are no indications that the derogation is not proportionate.” That is bureaucratic logic at its blandest.

The Commission’s Section 3 conclusion includes the sentence “the available data are insufficient to provide a definitive answer” on proportionality, followed immediately by “there are no indications that the derogation is not proportionate.”

In plain language, they can’t prove the policy isn’t violating rights, but since they can’t prove that it is, they will treat it as acceptable.

The same report admits it can’t even connect the dots between all that scanning and any convictions. Section 2.2.3 states: “It is not currently possible…to establish a clear link between these convictions and the reports submitted by providers.” Germany and Spain didn’t provide usable figures.

Keep reading

Meta Pushes Canada for App Store Age Verification ID Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens.

The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads.

Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.”

Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.”

That phrase may sound privacy-conscious, but in practice, the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet?

Keep reading

Congress Pushes for Nationwide Internet Age Verification Plan

Republican lawmakers are proposing a new way to hold tech companies accountable for complying with age verification laws, despite resistance from websites like Pornhub. The App Store Accountability Act (ASA), introduced by Senator Mike Lee (R-UT) and Representative John James (R-MI), proposes a different model: requiring app stores themselves to verify users’ ages and pass that information to apps when they are downloaded.

The bill is part of a broader push in Congress to tighten safeguards for minors online and has earned support from major tech companies, including Facebook parent company Meta, Pinterest, and Snap. Pinterest CEO Bill Ready argues that one standard would simplify the process and reduce the confusion created by a patchwork of state requirements. “The need for a federal standard is urgent,” he said.

“I think most people at most of these companies probably do want to protect kids,” Sen. Lee said, adding that support from tech companies like Pinterest “makes a big difference.”

However, the proposal faces resistance from civil liberties groups and digital rights advocates. Critics warn that compulsory age verification could limit access to lawful online content, raising First Amendment concerns. They also cite significant privacy risks, arguing that systems requiring users to submit sensitive personal information could expose them to data breaches or misuse.

Some major websites have rejected attempts to enforce online age verification. Pornhub has withdrawn its services from states that require government-issued ID or similar credentials for access to adult material. The company argued that these laws push users toward unregulated platforms while forcing supposedly legitimate sites to collect data they would prefer not to hold.

In 2025, the Supreme Court upheld a Texas age-verification law for explicit content, with the majority concluding that states may require age checks to prevent minors from viewing harmful material.

Supporters of federal action contend that the ASA would avoid the growing compliance difficulties posed by differing state regulations. Sen. Lee has stated, “I don’t believe that there’s anything unlawful, unconstitutional, or otherwise problematic about this legislation,” arguing that an app-store-centered approach would reduce repeated verification across multiple platforms.

Keep reading

Tokyo Court Ruling Against Cloudflare Sets “Dangerous Precedent” for Internet Infrastructure Liability

Cloudflare has been ordered by a Tokyo District Court to pay 500 million yen, about 3.2 million US dollars, after judges ruled the company liable for aiding copyright infringement.

The decision, as reported by TorrentFreak, came in a case brought by a coalition of Japan’s largest manga publishers, and it challenges the long-held understanding that network infrastructure providers are not responsible for what passes through their systems.

It also signals a growing international push to make companies like Cloudflare police online content, an approach that could redefine how the open internet operates.

The publishers, Shueisha, Kodansha, Kadokawa, and Shogakukan, argued that Cloudflare’s global network, which caches and accelerates websites, helped pirate manga sites distribute illegal copies of their work. They said Cloudflare’s failure to verify customer identities allowed those sites to hide “under circumstances where strong anonymity was secured,” a factor the court said contributed to its finding of liability.

Cloudflare said it will appeal, calling the ruling a threat to fairness and due process and warning that it could have broad implications for the future of internet infrastructure. The company argues that its conduct complies with global norms and that it has no direct control over the content its clients publish or distribute.

The legal fight between Cloudflare and Japan’s major publishers began in 2018. The publishers asked the Tokyo District Court to intervene, claiming Cloudflare’s technology enabled piracy sites to thrive. They wanted the company to sever ties with the offending domains.

In 2019, a partial settlement was reached. The deal, later disclosed, required Cloudflare to stop replicating content from sites only after Japanese courts officially declared them illegal.

That agreement quieted the conflict for a time, but it did not resolve the larger question of whether a network service should be required to decide which content is lawful.

By early 2022, the same publishers returned to court, alleging that Cloudflare had failed to take “necessary measures” against known infringing sites.

They filed a new claim targeting four specific works and sought around four million dollars in damages. They also asked for an order that would compel Cloudflare to terminate service for illegal sites.

Keep reading