Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights, but of rights cherry-picked from the EU Charter of Fundamental Rights, often in conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene, but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sexual Abuse Regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, lumping them all together as potential threats rather than treating their communications as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.
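In practice, “scanning of private communications” usually means hash-matching: client-side code fingerprints each outgoing attachment and compares it against a database of known abuse material. Here is a minimal sketch of that pattern, with hypothetical names and stdlib-only Python; real deployments such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, not the exact SHA-256 match shown here.

```python
# Minimal sketch of client-side hash scanning, the kind of measure a
# provider might adopt to demonstrate "reasonable mitigation".
# Hypothetical: real systems (e.g. PhotoDNA) use perceptual hashes that
# tolerate re-encoding; exact SHA-256 is used only to keep this self-contained.
import hashlib

# Provider-distributed blocklist of fingerprints of known illegal material.
BLOCKLIST: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def fingerprint(attachment: bytes) -> str:
    return hashlib.sha256(attachment).hexdigest()

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be sent, False if it is flagged."""
    return fingerprint(attachment) not in BLOCKLIST

if scan_before_send(b"holiday photo"):
    print("sent")  # every private attachment is fingerprinted on-device first
```

The privacy objection is visible in the control flow itself: the check runs on-device against every attachment from every user, before any encryption and regardless of suspicion.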

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting other core values.

But, wait. There’s more…

Keep reading

Substack expands censorship to Australian users

Last week, we noted that Substack had caved in to the UK censorship regime and was restricting the content that UK users can access unless they verified their age with either a selfie or a government-approved ID.

Age verification is not about keeping children “safe”; it is about control. Online age verification is increasingly being integrated with digital ID systems, particularly through government-backed digital identity wallets, and it is becoming a foundational component of those systems: several countries, including the US, EU member states, the UK, and Australia, are advancing digital ID frameworks in which age verification is a core function.

For example, the GOV.UK Wallet is under development and will be used for identity verification, with age verification as a key application. And in Australia, the Digital ID Act 2024 established the Australian Government Digital ID System, which allows users to prove their identity online.

The example we used in our previous article to demonstrate the type of content being censored for UK users on Substack, unless they comply with the rollout of the digital ID agenda, was the article ‘UK’s open border policy is not normal; nor is it acceptable’.

Along similar lines, yesterday a Substack user re-stacked our article ‘London Primary school teacher is banned from working with children for telling a Muslim pupil that Britain is a Christian country’. Substack has censored the article for non-paying users who have not complied with age verification.

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three of them, the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333), would require it outright.

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Berlin Approves New Expansion of Police Surveillance Powers

Berlin’s regional parliament has passed a far-reaching overhaul of its “security” law, giving police new authority to conduct both digital and physical surveillance.

The CDU-SPD coalition, supported by AfD votes, approved the reform of the General Security and Public Order Act (ASOG), changing the limits that once protected Berliners from intrusive policing.

Interior Senator Iris Spranger (SPD) argued that the legislation modernizes police work for an era of encrypted communication, terrorism, and cybercrime. But it undermines core civil liberties and reshapes the relationship between citizens and the state.

One of the most controversial elements is the expansion of police powers under paragraphs 26a and 26b. These allow investigators to hack into computers and smartphones under the banner of “source telecommunications surveillance” and “online searches.”

Police may now install state-developed spyware, known as trojans, on personal devices to intercept messages before or after encryption.

If the software cannot be deployed remotely, the law authorizes officers to secretly enter a person’s home to gain access.

This enables police to install surveillance programs directly on hardware without the occupant’s knowledge. Berlin had previously resisted such practices, but now joins other federal states that permit physical entry to install digital monitoring tools.
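To see why endpoint access matters so much, consider a minimal sketch (hypothetical function names; uses the third-party Python package `cryptography`): end-to-end encryption protects a message in transit, but any implant that can hook the messaging app on the device reads the plaintext before encryption is ever applied.

```python
# Minimal sketch: why an endpoint implant defeats end-to-end encryption.
# Hypothetical example; requires the third-party 'cryptography' package
# (pip install cryptography). Names like send_message/spyware_hook are
# illustrative, not taken from any real system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # stands in for an end-to-end session key
cipher = Fernet(key)

captured = []                    # what an on-device implant would see

def spyware_hook(plaintext: str) -> None:
    # With code execution on the endpoint, the implant reads the message
    # buffer before any encryption takes place.
    captured.append(plaintext)

def send_message(plaintext: str) -> bytes:
    spyware_hook(plaintext)                      # interception: before encryption
    return cipher.encrypt(plaintext.encode())    # only ciphertext goes on the wire

token = send_message("meet at 8")
print(captured)               # ['meet at 8'] -- plaintext, no key required
print(cipher.decrypt(token))  # b'meet at 8' -- recipient decrypts as normal
```

Nothing about the transport encryption is broken; the interception simply happens where the plaintext necessarily exists.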

Keep reading

Liberals want to control what you watch online

New regulations from the Liberal Government’s Canadian Radio-television and Telecommunications Commission (CRTC) seek to apply ‘Canadian content’ (CanCon) requirements to online platforms like YouTube and Spotify.

What could this mean for your online experience?

Will content that the Government doesn’t designate as sufficiently ‘Canadian’ disappear from your streaming platforms? Could companies like Netflix decide to pull out of Canada altogether rather than try to comply with onerous requirements?

Host Kris Sims is joined by longtime journalist and former CRTC vice-chair Peter Menzies to discuss what it all means.

Keep reading

Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

Australian Leaders and Legacy Media Celebrate Launch of Online Digital ID Age Verification Law

It was sold as a “historic day,” the kind politicians like to frame with national pride and moral purpose.

Cameras flashed in Canberra as Australia’s Prime Minister Anthony Albanese stood at the podium, declaring victory in the fight to “protect children.”

What Australians actually got was a nationwide digital ID system. Starting December 10, every citizen logging into select online platforms must now pass through digital ID verification, biometric scans, face matching, and document checks, all justified as a way to keep under-16s off social media.

Kids are now banned from certain platforms, but it’s the adults who must hand over their faces, IDs, and biometric data to prove they’re not kids.

“Protecting children” has been converted into a universal surveillance upgrade for everyone.

According to Albanese, who once said if he became a dictator the first thing he would do was ban social media, the Online Safety Amendment (Social Media Minimum Age) Bill 2024 will “change lives.”

He described it as a “profound reform” that will “reverberate around the world,” giving parents “peace of mind” and inspiring “the global community” to copy Australia’s example.

The Prime Minister’s pride, he said, had “never been greater.” Listening to him, you’d think he’d cured cancer rather than made face scans mandatory to log in to Facebook.

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.
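A minimal sketch makes the point concrete. Assume a naive signed age attestation (hypothetical field names, stdlib-only Python; real digital ID wallets use public-key signatures rather than a shared HMAC key): the token that proves “over 18” also carries a stable subject identifier, so every site that verifies age also learns who is asking.

```python
# Minimal sketch of a naive signed age attestation, to illustrate why
# "verify your age" tends to mean "verify you". Hypothetical token format;
# real wallets use public-key signatures and richer claim sets.
import base64, hashlib, hmac, json

ISSUER_KEY = b"demo-issuer-key"   # stand-in for the issuer's signing key

def issue_attestation(subject_id: str, over_18: bool) -> str:
    # The wallet/issuer signs a claims payload for the user to present.
    payload = json.dumps({"subject_id": subject_id, "over_18": over_18},
                         sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def verify_attestation(token: str) -> dict:
    # The website checks the signature, then reads the claims.
    payload_b64, sig_b64 = token.split(".")
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(payload)

claims = verify_attestation(issue_attestation("citizen-12345", over_18=True))
print(claims["over_18"])      # True -- the age check passes...
print(claims["subject_id"])   # 'citizen-12345' -- ...and the site learns a stable ID
```

Designs that prove only the age predicate without revealing identity (blind signatures, zero-knowledge proofs) do exist; the sketch shows the simpler pattern that critics fear these frameworks will default to.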

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading

The EU Insists Its X Fine Isn’t About Censorship. Here’s Why It Is.

When the European Commission fined X €120 million on December 5, officials could not have been clearer. This, they said, was not about censorship. It was just about “transparency.”

They repeat it so often you start to wonder why.

The fine marks the first major enforcement of the Digital Services Act, Europe’s new censorship-driven internet rulebook.

It was sold as a consumer protection measure designed to make online platforms safer and more accountable, but it includes a whole list of censorship requirements, with fines for platforms that don’t comply.

The first target is Elon Musk’s X, and the list of alleged violations looks less like user safety concerns and more like a blueprint for controlling who gets heard, who gets trusted, and who gets to talk back.

The Commission charged X with three violations: the paid blue checkmark system, the lack of advertising data, and restricted data access for researchers.

None of these involves direct content censorship. But all of them shape visibility and credibility, and enable surveillance, just in more polite language.

Musk’s decision to turn blue checks into a subscription feature ended the old system where establishment figures, journalists, politicians, and legacy celebrities got verification.

The EU called Musk’s decision “deceptive design.” The old version, apparently, was honesty itself. Before, a blue badge meant you were important. After, it meant you paid. Brussels prefers the former, where approved institutions get algorithmic priority, and the rest of the population stays in the queue.

The new system threatened that hierarchy. Now, anyone could buy verification, diluting the aura of authority once reserved for anointed voices.

However, that’s not the full story. Under the old Twitter system, verification was sold as a public service, but in reality it worked more like a back-room favor and a status purchase.

The main application process was shut down in 2010, so unless you were already famous, the only way to get a blue check was to spend enough money on advertising or to be important enough to trigger impersonation problems.

Ad Age reported that advertisers who spent at least fifteen thousand dollars over three months could get verified, and Twitter sales reps told clients the same thing. That meant verification was effectively a perk reserved for major media brands, public figures, and anyone willing to pay. It was a symbol of influence rationed through informal criteria and private deals, creating a hierarchy shaped by cronyism rather than transparency.

Keep reading

Wizz, the ‘Tinder for kids’ app, exposes kids to predators. Congress must act.

An app called “Wizz” has been making headlines lately for connecting minors with sexual predators. Many have described this app as a “Tinder for kids.” It’s the same iconic swipe-right, swipe-left functionality and the same purpose of meeting up with strangers, only this time targeted at both teens and adults.

What’s the result of this app design? A 12-year-old girl meeting up with a supposed 14-year-old boy that Wizz connected her with … only to discover the “boy” was an adult male, who sexually assaulted her.

An 8th grader being sexually abused by a 27-year-old man, then finding out she was only one of several underage girls he had groomed through Wizz.

An 11-year-old girl being sexually assaulted by a U.S. Marine she met on Wizz. 

All this in the last year alone. And there are many more cases.  

As reports of Wizz facilitating child sexual abuse continue to pile up, something must change with the app itself and more broadly when it comes to online child safety.   

Just a few years back, the National Center on Sexual Exploitation urged app stores to remove Wizz on account of the rampant sexual exploitation occurring on the platform. Within 36 hours, both Google Play and the Apple App Store agreed. Later on, Wizz was reinstated, with what appeared to be a number of new safety tools.  

As time went by, however, it became abundantly clear that Wizz was not as safe as it seemed. If the continued reports of sexual exploitation weren’t convincing enough, the New York Post reported on what happened when the company’s safety tools were directly pressure tested. 

Although Wizz claims to have robust age verification, a 52-year-old man said he was able to create an account as a 15-year-old. How? Because even though the age verification tech flagged this man’s profile for review, he said that Wizz moderators went ahead and approved it within minutes.

This is even worse than not having any age verification to begin with.

Wizz made claims of safety by boasting about tools like age verification, but behind the scenes it directly overrode the concerns flagged by those very tools.

Keep reading