UK Parliament Rejects Petition to Repeal Online Censorship Law, Calls for Expanded Censorship

This week in the UK, Parliament held a debate in response to a public petition that gathered hundreds of thousands of signatures calling for the repeal of the Online Safety Act (OSA).

It was a rare opportunity for elected officials to prove they still listen to their constituents.

Instead, the overwhelming message from MPs was clear: thanks for your concern, but we’d actually like even more control over what you can do online.

One by one, MPs stood up not to defend free expression or to question whether one of the most radical internet control laws in modern British history might have gone too far, but to argue that it hadn’t gone far enough.

“It’s Not Censorship, It’s Responsibility” (Apparently)

Lizzi Collinge, Labour MP for Morecambe and Lunesdale, insisted the OSA “is not about controlling speech.” She claimed it was about giving the online world the same “safety features” as the offline one.

This was a recurring theme throughout the debate: reassure the public that speech isn’t being restricted while calling for more mechanisms to restrict it.

Ian Murray, Minister for Digital Government and Data, also insisted the OSA protects freedom of expression. According to him, there’s no contradiction in saying people can speak freely, as long as they’re age-verified, avoid VPNs, and don’t say anything that might be flagged by a government regulator.

It’s a neat trick. Say you support free speech, then build an entire law designed to monitor, filter, and police it.

VPNs in the Firing Line

There is a growing fixation inside government with VPNs. These are basic privacy tools used by millions of people every day, often to protect their data. But several MPs, including Jim McMahon, Julia Lopez, and Ian Murray, suggested VPNs should be subject to age verification or regulatory restrictions.

It’s unclear whether these MPs understand how VPNs work or if they simply dislike the idea of anyone browsing the internet without supervision.

Either way, the intent is clear. The government wants fewer ways for people to browse anonymously.

Keep reading

Porn Sites Must Block VPNs To Comply With Indiana’s Age-Verification Law, State Suggests in New Lawsuit

Indiana Attorney General Todd Rokita is suing dozens of porn websites, claiming that they are in violation of the state’s age-verification law and seeking “injunctive relief, civil penalties, and recovery of costs incurred to investigate and maintain the action.”

Last year, Indiana Senate Bill 17 mandated that websites featuring “material harmful to minors” must verify that visitors are age 18 or above. Rather than start checking IDs, Aylo—the parent company of Pornhub and an array of other adult websites—responded by blocking access for Indiana residents.

Now, Indiana says this is not good enough. To successfully comply, Pornhub and other Aylo platforms (which include Brazzers, Youporn, and Redtube, among others) must also block virtual private networks and other tools that allow internet users to mask their IP addresses, the state suggests.

This is an insane—and frighteningly dystopian—interpretation of the law.

Broad Anti-Privacy Logic

In a section of the suit detailing how Aylo allegedly violated the age-check law, Indiana notes that last July, “an investigator employed by the Office of the Indiana Attorney General (‘OAG Investigator’) accessed Pornhub.com from Indiana using a Virtual Private Network (VPN) with a Chicago, Illinois IP address.”

“Defendants have not implemented any reasonable form of age verification on its website Pornhub.com,” the suit states. It goes on to detail how Indiana investigators also accessed Brazzers.com, Faketaxi.com, Spicevids.com, and other adult websites using a VPN.
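
To see what that demand amounts to in practice, it helps to look at how IP-based geoblocking works in the first place. What follows is a minimal, purely hypothetical sketch, not code from Aylo or anyone else: the lookup table, region names, and example addresses (drawn from reserved documentation ranges) are all assumptions.

```python
# Hypothetical sketch of IP-based geoblocking, the kind of check a site
# might use to refuse Indiana visitors. The lookup table and example
# addresses (RFC 5737 documentation ranges) are invented for illustration.

BLOCKED_REGIONS = {"Indiana"}  # regions the site refuses to serve

def lookup_region(ip_address: str) -> str:
    """Stand-in for a GeoIP lookup: maps a connecting IP to a region."""
    geo_db = {
        "203.0.113.10": "Indiana",   # a visitor's real home connection
        "198.51.100.7": "Illinois",  # a VPN exit node in Chicago
    }
    return geo_db.get(ip_address, "Unknown")

def allow_access(ip_address: str) -> bool:
    # The site only ever sees the connecting IP. A VPN user from Indiana
    # arrives with the exit node's address, so this check passes.
    return lookup_region(ip_address) not in BLOCKED_REGIONS

print(allow_access("203.0.113.10"))  # False: direct Indiana connection is blocked
print(allow_access("198.51.100.7"))  # True: the same user via a Chicago VPN gets through
```

The sketch makes the underlying problem plain: a site never learns where a visitor really is, only which IP address is connecting. “Blocking VPNs,” then, means maintaining and policing ever-changing lists of known exit addresses, an open-ended obligation the state is reading into an age-verification statute.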

Keep reading

Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s penchant for intervention. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.”

The precautionary principle gives the EU immense power to regulate in the face of uncertainty, rather than allowing for experimentation within the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and its associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights, but it cherry-picks rights from the EU Charter of Fundamental Rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene, but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sex Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, lumping them all together as potential threats rather than treating their communications as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting its other core values.

But, wait. There’s more…

Keep reading

Substack expands censorship to Australian users

Last week, we noted that Substack had caved in to the UK censorship regime and was restricting the content that UK users can access unless they verified their age with either a selfie or a government-approved ID.

Age verification is not about keeping children “safe”; it is about control. Online age verification is increasingly being integrated with digital ID systems, particularly through government-backed digital identity wallets, and is becoming a foundational component of those systems, with several countries, including the US, EU member states, the UK, and Australia, advancing digital ID frameworks where age verification is a core function.

For example, the GOV.UK Wallet is under development and will be used for identity verification, with age verification as a key application. And in Australia, the Digital ID Act 2024 established the Australian Government Digital ID System, which allows users to prove their identity online.

The example we used in our previous article to demonstrate the type of content being censored for UK users on Substack, unless they comply with the rollout of the digital ID agenda, was the article ‘UK’s open border policy is not normal; nor is it acceptable’.

Along similar lines, yesterday, a Substack user re-stacked our article ‘London Primary school teacher is banned from working with children for telling a Muslim pupil that Britain is a Christian country’. Substack has censored the article for non-paying users who have not complied with age verification.

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three of them, the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333), would require it outright.

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Berlin Approves New Expansion of Police Surveillance Powers

Berlin’s regional parliament has passed a far-reaching overhaul of its “security” law, giving police new authority to conduct both digital and physical surveillance.

The CDU-SPD coalition, supported by AfD votes, approved the reform of the General Security and Public Order Act (ASOG), changing the limits that once protected Berliners from intrusive policing.

Interior Senator Iris Spranger (SPD) argued that the legislation modernizes police work for an era of encrypted communication, terrorism, and cybercrime. But it undermines core civil liberties and reshapes the relationship between citizens and the state.

One of the most controversial elements is the expansion of police powers under paragraphs 26a and 26b. These allow investigators to hack into computers and smartphones under the banner of “source telecommunications surveillance” and “online searches.”

Police may now install state-developed spyware, known as trojans, on personal devices to intercept messages before or after encryption.

If the software cannot be deployed remotely, the law authorizes officers to secretly enter a person’s home to gain access.

This enables police to install surveillance programs directly on hardware without the occupant’s knowledge. Berlin had previously resisted such practices, but now joins other federal states that permit physical entry to install digital monitoring tools.

Keep reading

Liberals want to control what you watch online

New regulations from the Liberal Government’s Canadian Radio-television and Telecommunications Commission (CRTC) seek to apply ‘Canadian content’ (CanCon) requirements to online platforms like YouTube and Spotify.

What could this mean for your online experience?

Will content that the Government doesn’t designate as sufficiently ‘Canadian’ disappear from your streaming platforms? Could companies like Netflix decide to pull out of Canada altogether rather than try to comply with onerous requirements?

Host Kris Sims is joined by longtime journalist and former CRTC vice-chair Peter Menzies to discuss what it all means.

Keep reading

Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

Australian Leaders and Legacy Media Celebrate Launch of Online Digital ID Age Verification Law

It was sold as a “historic day,” the kind politicians like to frame with national pride and moral purpose.

Cameras flashed in Canberra as Australia’s Prime Minister Anthony Albanese stood at the podium, declaring victory in the fight to “protect children.”

What Australians actually got was a nationwide digital ID system. Starting December 10, every citizen logging into select online platforms must now pass through digital ID verification, biometric scans, face matching, and document checks, all justified as a way to keep under-16s off social media.

Kids are now banned from certain platforms, but it’s the adults who must hand over their faces, IDs, and biometric data to prove they’re not kids.

“Protecting children” has been converted into a universal surveillance upgrade for everyone.

According to Albanese, who once said that if he became a dictator, the first thing he would do would be to ban social media, the Online Safety Amendment (Social Media Minimum Age) Bill 2024 will “change lives.”

He described it as a “profound reform” that will “reverberate around the world,” giving parents “peace of mind” and inspiring “the global community” to copy Australia’s example.

The Prime Minister’s pride, he said, had “never been greater.” Listening to him, you’d think he’d cured cancer rather than made face scans mandatory to log in to Facebook.

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.
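
As a purely hypothetical illustration of that last point, here is a minimal sketch of a credential-based age gate. Every name and field below is an assumption made for illustration, not a description of the FTC’s plans or of any real system.

```python
# A deliberately minimal, hypothetical sketch of a credential-based age
# gate. The point is only that a verification event produces a stored
# record, keyed to the person's credential rather than to the session.
import datetime
import uuid

verified_users: dict[str, dict] = {}  # survives logout: keyed by credential

def verify_age(credential_id: str, birth_year: int) -> str:
    """Record a verification event and hand back a reusable token."""
    token = str(uuid.uuid4())
    verified_users[credential_id] = {
        "token": token,
        "verified_at": datetime.datetime.now().isoformat(),  # ties person to event
        "adult": (datetime.date.today().year - birth_year) >= 18,  # crude age test
    }
    return token

def check_access(credential_id: str) -> bool:
    # Default deny: anyone not in the store is treated as unverified,
    # i.e., as a minor, until they have shown ID.
    record = verified_users.get(credential_id)
    return bool(record and record["adult"])

verify_age("passport-1234", birth_year=1990)
print(check_access("passport-1234"))   # True: the record outlives the visit
print(check_access("anonymous-user"))  # False: no ID on file means no access
```

Even in this toy version, the verification record persists after the user leaves; deleting it is a policy choice, not a technical necessity, which is precisely why such systems rarely stop where they start.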

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading