Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights — but cherry-picking rights from the EU Charter of Fundamental Rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being invoked to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sexual Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, lumping them all together as potential threats rather than treating their communications as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting other core values.

But, wait. There’s more…

Keep reading

Where Is Your Line In The Sand On Digital ID?

Mandatory digital ID is almost here. For years, The Free Thought Project and various other independent media outlets — including colleagues such as The Conscious Resistance Network, The Last American Vagabond, James Corbett, Jason Bermas, Josh Sigurdson of World Alternative Media, Whitney Webb’s Unlimited Hangout, and many more — have sounded the alarm on the encroaching dangers of digital ID.

From exposing the technocratic agenda of the scamdemic era, which attempted to assert digital identity as a “human right” in an effort to snare much of society into a mass surveillance grid.

To the United Nations’ push to implement digital identity as part of its Sustainable Development Goals (SDG 16) amid attempts to consolidate power for global governance.

And the latest attempts by the Trump administration to exploit concerns over election security as a means of ushering in digital ID domestically.

It is clear that efforts to implement this dystopian technocratic agenda are moving forward at full speed.

Earlier this year, California joined a growing list of over a dozen states offering digital driver’s licenses through digital wallets such as Apple Wallet and Google Wallet.

Just recently, the popular children’s gaming platform Roblox rolled out a new mandatory facial recognition system to verify the ages of its more than 36 million users.

Meanwhile, the state of Alaska recently began advancing plans to enhance its own biometric digital identity data collection system.

In recent years, one of the primary methods by which politicians have attempted to enact digital ID or similar measures has been exploiting concerns about child safety online — pushing a series of free-speech-infringing, censorship-inducing age verification laws that rely on artificial intelligence, facial recognition biometrics, and other such tools to implement these agendas.

At the same time these initiatives are sweeping their way through the country, nearly two dozen pieces of legislation are individually moving through Congress, each one seeking to serve as the next attempt to further entrap the American people in this surveillance panopticon.

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three of them — the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333) — would require it outright.

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Berlin Approves New Expansion of Police Surveillance Powers

Berlin’s regional parliament has passed a far-reaching overhaul of its “security” law, giving police new authority to conduct both digital and physical surveillance.

The CDU-SPD coalition, supported by AfD votes, approved the reform of the General Security and Public Order Act (ASOG), changing the limits that once protected Berliners from intrusive policing.

Interior Senator Iris Spranger (SPD) argued that the legislation modernizes police work for an era of encrypted communication, terrorism, and cybercrime. But it undermines core civil liberties and reshapes the relationship between citizens and the state.

One of the most controversial elements is the expansion of police powers under paragraphs 26a and 26b. These allow investigators to hack into computers and smartphones under the banner of “source telecommunications surveillance” and “online searches.”

Police may now install state-developed spyware, known as trojans, on personal devices to intercept messages before or after encryption.

If the software cannot be deployed remotely, the law authorizes officers to secretly enter a person’s home to gain access.

This enables police to install surveillance programs directly on hardware without the occupant’s knowledge. Berlin had previously resisted such practices, but now joins other federal states that permit physical entry to install digital monitoring tools.

Keep reading

Germany is Officially a Surveillance State – Civil Liberties Destroyed

Germany granted itself legal permission to use AI technology to aggressively monitor the entire population in real time. The Berlin House of Representatives passed amendments to the General Security and Public Order Act (ASOG) that grant the government access to citizens’ personal data by any means necessary, including forcibly entering their private homes.

Interior Senator Iris Spranger (SPD) declared the new laws necessary to fight terrorism in the digital age. German investigators may now legally hack IT systems, but if remote access is unavailable, authorities may “secretly enter and search” a suspect’s personal residence to confiscate their digital devices. The government does not need to notify citizens that they are under investigation before entering their homes without warning.

Germany will equip public spaces with advanced surveillance technology. Cell tower queries will be expanded to enable the government to access data from all private mobile phones. Network operators must be able to report the movements and locations of all citizens to the government. License plate scanners will be installed throughout the nation, and that data will be sent to a centralized database.

Deutschland has finally achieved official “1984” status—the nation is implementing unmanned drones to monitor the population.

All personal data may be used for “training and testing of artificial intelligence systems.” Authorities have free rein to steal data from publicly accessible websites to collect biometric comparisons of faces and voices. The government will implement automated facial recognition software that enables it to identify citizens immediately. The database will tie into the nationwide surveillance platform.

You are being watched. Civil liberties do not exist. Freedom is merely an illusion; your likeness — face, voice, movement, finances, family — exists in an ever-expanding government database that may be used however the government sees fit.

Keep reading

11 Signs That Our World Is Rapidly Becoming A Lot More Orwellian

All over the globe, the digital control grid that we are all living in just continues to get even tighter. They are using facial recognition technology to scan our faces, they are using license plate readers to track where we travel, they are systematically monitoring the conversations that we are having on our phones, and they are watching literally everything that we post on social media. At this stage, many of us just assume that nothing that we do or say is ever truly private. We really do live in a “Big Brother society”, and the potential for tyranny is off the charts. In fact, people are already getting arrested for “thought crimes” all over the world. If we do not take a stand now, someday soon we could wake up in a world where there is essentially no freedom left at all.

The exponential growth of AI technology is allowing authorities to watch, track, monitor and control us like never before.  If you are not alarmed by this, you might want to check if you are still alive.  The following are 11 signs that our world is rapidly becoming a lot more Orwellian…

#1 UK authorities are rolling out “a country-wide facial recognition system” that will use AI facial recognition cameras to watch the entire population…

On Thursday, officials in the UK pledged to roll out a country-wide facial recognition system to help police track down criminals. The country’s ministers have launched a 10-week consultation to analyze the regulatory and privacy framework of their AI-powered surveillance panopticon — but one way or another, the all-seeing eye is on its way.

There’s just one tiny wrinkle: the AI facial recognition cameras have a tendency to misidentify non-white people.

New reporting by The Guardian notes that testing of the AI tech conducted by the National Physical Laboratory (NPL) found that it’s “more likely to incorrectly include some demographic groups in its search results” — specifically Black and Asian people.

#2 Of course the control freaks in the UK also monitor everything that gets posted on social media.  One British man recently found this out the hard way when he was arrested for posing with a legally-owned gun in the United States…

A Yorkshire man was arrested over a photo he posted on social media featuring him holding a legally owned gun in the US.

Jon Richelieu-Booth posted a photo of himself in August holding a gun on LinkedIn while he was on a holiday in Florida.

He said he held the firearm lawfully, on private land and with full permission from its owner.

#3 If you do not believe that “thought crime” is real, just consider this next example.  11 police officers recently barged in and arrested a 34-year-old woman who was sitting naked in her own bathtub because she had used offensive words while texting another woman on her phone…

The United Kingdom has become an authoritarian nightmare, and the United States must remain vigilant if it does not want to go down the same course.

Elizabeth Kinney, a 34-year-old care assistant, was naked in the bathtub when 11 police officers barged into her home to arrest her.

Her crime was sending insults to another woman via text.

How would you feel if 11 police officers were staring at you while you were naked?

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading

‘Intellexa Leaks’ Reveal Wider Reach of Predator Spyware

Highly invasive spyware from a consortium led by a former senior Israeli intelligence official — and sanctioned by the US government — is still being used to target people in multiple countries, a joint investigation published Thursday revealed.

Inside Story in Greece, Haaretz in Israel, the Swiss-based WAV Research Collective, and Amnesty International collaborated on the investigation into the Intellexa Consortium, maker of the Predator commercial spyware. The “Intellexa Leaks” show that clients in Pakistan — and likely also in other countries — are using Predator to spy on people, including a prominent Pakistani human rights lawyer.

“This investigation provides one of the clearest and most damning views yet into Intellexa’s internal operations and technology,” said Amnesty International Security Lab technologist Jurre van Bergen.

Keep reading

WHO–Gates Unveils Blueprint For Global Digital ID, AI-Driven Surveillance, & Life-Long Vaccine Tracking For Everyone

In a document funded by the Gates Foundation and published in the October Bulletin of the World Health Organization, the WHO proposes a globally interoperable digital-identity infrastructure that would permanently track every individual’s vaccination status from birth.

The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid for tracking individuals.

It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.

Keep reading

U.S. Tech Giants Palantir and Dataminr Embed AI Surveillance in Gaza’s Post-War Control Grid

American surveillance firms Palantir and Dataminr have inserted themselves into the U.S. military’s operations center overseeing Gaza’s reconstruction, raising alarms about a dystopian AI-driven occupation regime under the guise of Trump’s peace plan.

Since mid-October, around 200 U.S. military personnel have operated from the Civil-Military Coordination Center (CMCC) in southern Israel, roughly 20 kilometers from Gaza’s northern border. Established to implement President Donald Trump’s 20-point plan—aimed at disarming Hamas, rebuilding the Strip, and paving the way for Palestinian self-determination—the center has drawn UN Security Council endorsement.

Yet no Palestinian representatives have joined these discussions on their future. Instead, seating charts and internal presentations reveal the presence of Palantir’s “Maven Field Service Representative” and Dataminr’s branding, signaling how private U.S. tech companies are positioning to profit from the region’s devastation.

Palantir’s Maven platform, described by the U.S. military as its “AI-powered battlefield platform,” aggregates data from satellites, drones, spy planes, intercepted communications, and online sources to accelerate targeting for airstrikes and operations. Defense reports highlight how it “packages” this intelligence into searchable apps for commanders, effectively shortening the “kill chain” from identification to lethal action.

Palantir’s CTO recently touted this capability as “optimizing the kill chain.” The firm secured a $10 billion Army contract over the summer to refine Maven, which has already guided U.S. strikes in Yemen, Syria, and Iraq.

Palantir’s ties to Israel’s military run deep, formalized in a January 2024 strategic partnership for “war-related missions.” The company’s Tel Aviv office, opened in 2015, has expanded rapidly amid Israel’s Gaza operations. CEO Alex Karp has defended the commitment, declaring Palantir the first company to be “completely anti-woke” despite genocide accusations.

Keep reading