EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

We obtained a copy of the letter for you here.

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

Keep reading

The Disguised Return of the EU’s Private Message Scanning Plot

A major political confrontation over online privacy is approaching as European governments prepare to decide on “Chat Control 2.0,” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text, providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.

Keep reading

Why We Have a Surveillance State

It is the inevitable consequence of our prevailing governing philosophy.

“Gentlemen do not read each other’s mail.” – Henry Stimson, Secretary of State, 1929

I was upbraided recently by a dear friend for my frequent praise of outcast investor Peter Thiel, on account of Thiel’s involvement with the big data company Palantir. He forwarded me a Bloomberg article titled “Peter Thiel’s data-mining company is using War on Terror tools to track American citizens,” adding: “Really scary. Not good for democracy; a better version of the Stasi’s filing system and way cheaper and more efficient.”

Increasingly, we live under the kind of comprehensive surveillance predicted by science fiction writers. But Palantir is just an arms merchant, not the architect of our brave new world. Like guns, its products can be used for good or evil. I have always believed that moral responsibility lies with the wielder of weapons, not with the manufacturers. (This is often expressed as “Guns don’t kill people, people kill people.”)

Why Peter Thiel chose to become an arms merchant rather than invest his considerable talents and fortune elsewhere is a fair question, given his libertarian leanings. I have no insight into the answer. I would guess that he founded Palantir as an act of patriotism after 9/11, and that it metastasized by following the money, cash being the mother’s milk of the state, something the celebrated Alexander Hamilton deeply understood.

Surveillance Is Not the Problem, but It Is a Symptom

The real threat to the republic, however, lies not in the weapons available but in the unlimited and unaccountable bureaucracy in Washington that deploys them, both at home and abroad. Having broken free of constitutional constraints, America’s political class now directs an all-powerful state that naturally adopts every tool technology has to offer.

Because our prevailing governing philosophy acknowledges no limits to the doing of good or the thwarting of evil, any means necessary may be employed as long as worthy ends can be plausibly asserted. Evil must be discouraged, taxed, or outlawed; good must be encouraged, subsidized, or made mandatory. This progressive government mission must be implemented in the public square, in the marketplace, in our educational institutions, around the world, and in our homes until all forms of social injustice are eliminated.

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that the proposed regulation raises “significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

ICE to Deploy Palantir’s ImmigrationOS AI to Track Migrants’ Movements

U.S. Immigration and Customs Enforcement is moving forward with ImmigrationOS, a new AI system built by Palantir Technologies to give officers near real-time visibility into immigrants’ movements and sharpen enforcement priorities nationwide. The agency awarded Palantir a $30 million contract in early 2025, with a working prototype due by September 25, 2025, and an initial operating period of at least two years, according to agency planning documents and contract disclosures. ICE frames the system as a way to speed removals of people already prioritized for enforcement, better track self-deportations, and coordinate federal data that now sits in disconnected silos.

What ImmigrationOS is meant to do

ImmigrationOS is designed to pull together a wide range of government-held records to sort, flag, and route cases to officers in the field. ICE officials say the tool will help them focus on individuals linked to transnational criminal organizations, violent offenders, documented gang members, and those who have overstayed visas.

The system is also built to register when people leave the United States on their own, so field offices can avoid wasted detention and travel costs on cases that no longer require action. While the agency describes the platform as a needed modernization step, civil liberties groups warn that an AI-driven system with sweeping data inputs risks mistakes that could touch the lives of lawful residents and even U.S. citizens.

Keep reading

Time to Pay Attention: Europe Just Eviscerated Monetary Privacy, and It’s Coming Here Next

By 2027, the European Union will have completed the most invasive overhaul of its financial system in modern history. Under Regulation (EU) 2024/1624, cash transactions above €10,000 will be illegal—no matter if it’s a private sale, a used car, or a family heirloom. 

“Persons trading in goods or providing services may accept or make a payment in cash only up to an amount of EUR 10 000 or the equivalent in national or foreign currency, whether the transaction is carried out in a single operation or in several operations which appear to be linked.” — Regulation (EU) 2024/1624, Article 80, paragraph 1
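
The “operations which appear to be linked” language is the operative part: the cap applies to a cumulative total, not to each payment in isolation. Purely as a sketch of the arithmetic, here is how a trader’s compliance check might aggregate payments against the cap. The EUR 10,000 figure comes from the quoted article; the linkage test used here (same counterparty within a rolling window) is an illustrative assumption, since the regulation does not spell one out.

```python
from datetime import datetime, timedelta

CASH_CAP_EUR = 10_000.00  # Article 80(1), Regulation (EU) 2024/1624

def cash_payment_allowed(history, counterparty, amount, when,
                         window=timedelta(days=30)):
    """Check a proposed cash payment against the EUR 10,000 cap.

    'Operations which appear to be linked' are approximated here as
    payments involving the same counterparty within a rolling window --
    an illustrative assumption; the regulation does not define the
    linkage test this precisely.
    """
    linked_total = sum(
        amt for party, amt, ts in history
        if party == counterparty and timedelta(0) <= when - ts <= window
    )
    return linked_total + amount <= CASH_CAP_EUR

# Three EUR 4,000 payments that appear linked: the third breaches the cap.
now = datetime(2027, 1, 15)
history = [("buyer-a", 4_000.00, now - timedelta(days=2)),
           ("buyer-a", 4_000.00, now - timedelta(days=1))]
print(cash_payment_allowed(history, "buyer-a", 4_000.00, now))  # False
```

However the linkage test is ultimately defined, the effect is the same: someone must keep records of who paid whom, when, and how much, which is the surveillance point critics raise.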

Simultaneously, the Markets in Crypto-Assets Regulation (MiCA) forces all crypto service providers to implement full-blown surveillance via mandatory identity verification and reporting. An anonymous Bitcoin transfer? That window is closing. And rounding out the trifecta is the European Central Bank’s digital euro, which promises privacy—just not too much of it.

This isn’t a proposal. It’s happening. And if you think it’s just about catching criminals, you haven’t been paying attention.

The justification, as always, is safety. European officials cite €700 billion in annual money laundering as the reason for the crackdown, framing the new rules as a bold stand against crime and corruption. But what they’re building isn’t a net—it’s a cage. These laws don’t distinguish between a cartel kingpin and a retiree who prefers cash. They treat every transaction like a threat, every citizen like a suspect, and every private interaction as a problem to be solved by surveillance.

Keep reading

UK Crime Agency Backs “Upload Prevention” Plan to Scan Encrypted Messages

Britain’s Internet Watch Foundation (IWF) has decided that privacy needs a chaperone.

The group has launched a campaign urging tech companies to install client-side scanning in encrypted apps, a proposal that would make every private message pass through a local checkpoint before being sent.

The IWF calls it an “upload prevention” system. Critics might call it the end of private communication disguised as a safety feature.

Under the plan, every file or image shared on a messaging app would be checked against a database of known child sexual abuse material (CSAM).

The database would be maintained by what the IWF describes as a “trusted body.” If a match is found, the upload is blocked before encryption can hide it. The pitch is that nothing leaves the device unless it’s cleared, but that is like claiming a home search is fine as long as the police do not take anything.

As has been shown in Germany, this technology would not only catch criminals. Hashing errors and false positives happen, which means lawful material could be stopped before it ever leaves a phone.
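
To make the mechanics concrete, here is a minimal sketch of the kind of on-device check being proposed, assuming a simple local blocklist. It uses an exact SHA-256 match for clarity; real deployments would use perceptual hashes so that resized or re-encoded images still match, which is exactly where false positives enter. Every name and value here is illustrative, not drawn from the IWF’s actual design.

```python
import hashlib

# Hypothetical blocklist of hashes, distributed to devices by a "trusted body".
# Shown as plain SHA-256 digests for simplicity; real systems ship perceptual
# hashes so that near-duplicate images also match.
BLOCKLIST: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(data: bytes) -> str:
    """Exact cryptographic hash; a stand-in for a perceptual hash."""
    return hashlib.sha256(data).hexdigest()

def upload_allowed(data: bytes) -> bool:
    """Client-side 'upload prevention': runs on the device, BEFORE encryption.

    Returns False when the file matches the blocklist, so the app blocks
    the upload. With perceptual hashing, lawful images can collide with
    blocklist entries, so a match is not proof of illegal content.
    """
    return file_digest(data) not in BLOCKLIST

# The check sits between the user hitting "send" and the app encrypting.
attachment = b"holiday photo bytes"
if upload_allowed(attachment):
    pass  # encrypt and send as normal
else:
    pass  # block the upload; some proposals would also notify the provider
```

Note that everything interesting happens in the contents of BLOCKLIST, which users cannot inspect: whatever the “trusted body” adds to it is what gets blocked.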

And once the scanning infrastructure is built, there is nothing stopping it from being redirected toward new categories of “harmful” or “illegal” content. The precedent would be set: your phone would no longer be a private space.

Although the IWF is running this show, it has plenty of political muscle cheering it on.

Safeguarding Minister Jess Phillips praised the IWF campaign, saying: “It is clear that the British public want greater protections for children online and we are working with technology companies so more can be done to keep children safer. The design choices of platforms cannot be an excuse for failing to respond to the most horrific crimes…If companies don’t comply with the Online Safety Act they will face enforcement from the regulator. Through our action we now have an opportunity to make the online world safer for children, and I urge all technology companies to invest in safeguards so that children’s safety comes first.”

That endorsement matters. It signals that the government is ready to use the already-controversial Online Safety Act to pressure companies into surveillance compliance.

Ofcom, armed with new regulatory powers under that Act, can make “voluntary” ideas mandatory with little more than a memo.

The UK’s approach to online regulation is becoming increasingly invasive. The government recently tried to compel Apple to install a back door into its encrypted iCloud backups under the Investigatory Powers Act. Apple refused and instead pulled its most secure backup option from British users, leaving the country with weaker privacy than nearly anywhere else in the developed world.

Keep reading

Europe’s AI Surveillance Race Against the Rules That Protect Privacy

Europol’s deputy executive director, Jürgen Ebner, is urging the European Union to relax its own legal restraints on artificial intelligence, arguing that the rules designed to protect citizens are slowing down police innovation.

He wants a system that allows the agency to skip lengthy rights checks in “emergency” situations and move ahead with new AI tools before the usual data protection reviews are complete.

Ebner told POLITICO that criminals are having “the time of their life” with “their malicious deployment of AI,” while Europol faces months of delay because of required legal assessments.

Those safeguards, which include evaluations under the GDPR and the EU’s AI Act, exist to stop unaccountable automation from taking hold in law enforcement.

Yet Ebner’s comments reveal a growing tendency inside the agency to treat those same checks as obstacles rather than vital protections.

He said the current process can take up to eight months and claimed that speeding it up could save lives.

But an “emergency” fast track for AI surveillance carries an obvious danger. Once such shortcuts are created, the idea of what qualifies as an emergency can expand quickly.

Technologies that monitor, predict, or profile people can then slip beyond their intended use, leaving citizens exposed to automated systems that make judgments about them without transparency or recourse.

Over the past decade, Europol has steadily increased its technical capabilities, investing heavily in large-scale data analysis and decryption tools.

These systems are presented as essential for fighting cross-border crime, yet they also consolidate immense quantities of personal data under centralized control.

Without strong oversight, such tools can move from focused investigation toward widespread data collection and surveillance.

European Commission President Ursula von der Leyen has already promised to double Europol’s workforce and turn it into a central hub for combating organized crime, “navigating constantly between the physical and digital worlds.”

A legislative proposal to strengthen the agency’s powers is planned for 2026, raising questions about how much authority and access to data Europol will ultimately gain.

Ebner, who oversees governance at Europol, said that “almost all investigations” now involve the internet and added that the cost of technology has become a “massive burden on law enforcement agencies.”

He urged stronger collaboration with private technology firms, stating that “artificial intelligence is extremely costly. Legal decryption platforms are costly. The same is to be foreseen already for quantum computing.”

Keep reading

EU to establish ‘Ministry of Truth’ – Guardian

The European Union is planning to launch a centralized hub for monitoring and countering what it calls foreign “disinformation,” according to a leaked document seen by the Guardian. Critics have long warned that Brussels’ initiatives amount to the institutionalization of a censorship regime.

According to the European Commission proposal, set to be published on November 12, the so-called Centre for Democratic Resilience will function as part of a broader “democracy shield” strategy, pitched by Commission President Ursula von der Leyen ahead of the 2024 European elections.

Participation in the center will be voluntary, and the Commission has welcomed “like-minded partners” outside the bloc, including the UK and countries seeking accession.

The draft accuses Russia of escalating “hybrid attacks” by disseminating false narratives, while also pointing to China as another threat – alleging that Beijing uses PR firms and social media influencers to advance its interests across Europe.

Keep reading

Court Keeps California’s Online ID Law Dream Alive

The Ninth Circuit Court of Appeals has declined to rehear NetChoice v. Bonta, leaving intact its earlier decision that upheld most of California’s new social media law, Senate Bill 976, also known as the Protecting Our Kids from Social Media Addiction Act.

NetChoice, the tech trade group behind the challenge, said it “will explore all available options to protect free speech and privacy online” after the denial of its petition for rehearing on November 6, 2025.

Governor Gavin Newsom signed SB 976 into law in September 2024.

The legislation compels social media platforms to implement “age assurance” measures to identify whether users are adults or minors.

This would likely mean platforms have to introduce some form of digital ID check to allow people to view or post.

Those requirements are not yet active, as California’s Attorney General has until January 1, 2027, to finalize the specific rules.

Attorney General Rob Bonta began the initial rulemaking process in October 2025.

NetChoice first sued in November 2024, arguing that SB 976 forces Californians to hand over personal documents just to engage in lawful online speech, a demand the group says violates the First Amendment.

On September 9, 2025, a three-judge Ninth Circuit panel mostly upheld the law, finding that it was too soon to determine whether the age assurance mandate would restrict free expression before the details of that process are set.

As a result, the Attorney General can continue developing the state’s age assurance framework, while NetChoice or other organizations may bring a new legal challenge once the regulations are issued.

In its prior decision, the Ninth Circuit also removed one element of the law requiring children’s accounts to automatically hide likes and comments. Writing for the court, Judge Ryan Nelson concluded that the rule “is not the least restrictive way to advance California’s interest in protecting minors’ mental health.”

The rest of SB 976, including its age verification and content feed restrictions, remains largely intact.

The panel emphasized that without finalized regulations, it cannot yet decide whether these requirements would suppress lawful speech or create privacy risks.

NetChoice has continued to warn that the statute grants the state too much power over how people access and share information online. “NetChoice is largely disappointed in the Ninth Circuit’s ruling, and we will consider all available avenues to defend the First Amendment,” said Paul Taske, Co-Director of the NetChoice Litigation Center.

He added, “California’s law usurps the role of parents and gives the government more power over how legal speech is shared online. By mandating mass collection of sensitive data from adults and minors, it will undermine the security and privacy of families, putting them at risk of cybercrime such as identity theft.”

Keep reading