EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (on the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

First, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.
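To see concretely what this kind of detection does, here is a minimal Python sketch of exact-match hash scanning; the hash list is hypothetical, and real deployments typically rely on perceptual hashes such as Microsoft’s PhotoDNA so that re-encoded or lightly edited copies still match. What the sketch makes plain is that a match says nothing about context, intent, or legality in any particular jurisdiction.

```python
import hashlib

# Hypothetical database of hashes of previously catalogued images.
# Real systems use perceptual hashes (e.g., PhotoDNA) so that minor
# edits to an image still match; plain SHA-256 is shown for clarity.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def flag_attachment(data: bytes) -> bool:
    """Return True if the attachment matches a catalogued hash.

    Note what the check cannot tell us: nothing about context,
    intent, or legality under any particular Member State's law.
    It only says the bytes match a previously catalogued file.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES

# Every hit still has to be reviewed by a human: a match is a claim
# that the file was seen before, not a finding of criminality.
```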

The German Federal Criminal Police Office (BKA) reports that close to half of all Chat Control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

Microsoft confirms it will give the FBI your Windows PC data encryption key if asked — you can thank Windows 11’s forced online accounts for that

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys when presented with a valid legal order. These keys decrypt the data on a computer running Windows, giving law enforcement the means to break into a device and access its contents.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to your online account so that users can easily recover their data in scenarios where they might get locked out. This can be disabled, letting you save the key locally instead, but the default behavior is to store the key in Microsoft’s cloud when setting up a PC with a Microsoft Account.
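For readers who want to see which protectors guard a given drive, Windows ships a built-in command-line tool, manage-bde, that lists them. The snippet below is a minimal sketch wrapping it from Python; it assumes a Windows machine, an elevated prompt, and drive C: (the drive letter is an assumption).

```python
import subprocess

# List BitLocker key protectors for drive C: via the built-in
# manage-bde tool (Windows only; run from an elevated prompt).
# The output enumerates each protector, including the numerical
# recovery password that may also have been backed up online.
result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```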

Only ‘braindead’ believe WhatsApp is secure – Durov

Pavel Durov, the Russian tech entrepreneur who created the Telegram messenger app, has claimed there is no doubt WhatsApp lacks any meaningful privacy, after its parent company was hit with a new lawsuit.

In a major class-action lawsuit filed against Meta Platforms, Inc. in a US district court last week, an international group of plaintiffs from countries including Australia, Brazil and India has accused the company of making false claims about the privacy of its WhatsApp service.

“You’d have to be braindead to believe WhatsApp is secure in 2026,” Durov posted on X on Monday, mocking suggestions that Meta cannot read users’ messages. “When we analyzed how WhatsApp implemented its ‘encryption’, we found multiple attack vectors.”

The lawsuit challenges the cornerstone of WhatsApp’s privacy promise: its default end-to-end encryption, which uses the Signal protocol. The plaintiffs allege that, contrary to its in-app claim that “only people in this chat can read, listen to, or share” messages, Meta and WhatsApp “store, analyze, and can access virtually all of WhatsApp users’ purportedly ‘private’ communications.” The complaint cites unspecified whistleblowers as the source of this information.

UK Orders Ofcom to Explore Encryption Backdoors

By now, we’ve all heard the familiar refrain: “It’s for your safety.” It’s the soothing mantra of every government official who’s ever wanted a peek behind your digital curtains.

This week, with a move that would make East Germany blush, the UK government officially confirmed its intention to hand Ofcom (yes, that Ofcom, the regulator that once investigated whether Love Island was too spicy) the keys to your private messages.

The country, already experiencing rapidly declining civil liberties, is now planning to scan encrypted chats for “bad stuff.”

Now, for those unfamiliar, Ofcom is the UK’s communications regulator, one that has recently been handed broad censorship powers over online speech.

It’s become the government’s Swiss Army knife for everything from internet censorship to now, apparently, full-blown surveillance.

Under the Online Safety Act, Ofcom has been handed something called Section 121, which sounds like a tax loophole but is actually a legal crowbar for prying open encrypted messages.

It allows the regulator to compel any online service that lets people talk to each other (Facebook Messenger, Signal, iMessage, and so on) to install “accredited technology” to scan for terrorism or child abuse material.

The Encryption Double Life of Canberra

The Australian government is quietly relying on encrypted messaging to conduct sensitive business, even as it hardens its stance against public use of secure communications.

While the public faces increasing surveillance and legal pressure for using end-to-end encryption, senior officials are steering policy conversations into private digital spaces, shielding them from scrutiny under Freedom of Information (FOI) laws.

Since midyear, ministerial staff have been advising lobbyists, peak bodies and industry groups to avoid email altogether and submit reform proposals through the encrypted messaging app Signal.

Some of these exchanges have been requested using disappearing messages, ensuring there is no record retained on government systems.

Several sources confirmed to The Saturday Paper that this guidance is now common across a number of policy areas.

In addition to Signal, stakeholders have been encouraged to use phone calls for detailed conversations and limit the content of any written communications.

In at least one case, after a formal meeting, the follow-up came in the form of a verbal summary rather than the usual written recap sent by email.

While the government has maintained formal channels for official submissions, a secondary mode of policymaking is taking shape.

This mode operates out of reach of archiving protocols and public oversight.

One participant in this informal process described it as an effort to protect the early phases of policy development from outside scrutiny, arguing that “fluid thoughts and ideas” should be exempt from public record.

Yet the effect of these practices is to create a shadow layer of government consultation that leaves no trace and falls outside the accountability mechanisms intended to safeguard democratic participation.

Chat Control 2.0: EU Moves Toward Ending Private Communication

Between the coffee breaks and the diplomatic niceties of Brussels bureaucracy, a quiet dystopian revolution might be taking place. On November 26, a roomful of unelected officials could nod through one of the most consequential surveillance laws in modern European history, without ever having to face the public.

The plan, better known as Chat Control 2.0, sits on the agenda of the Committee of Permanent Representatives, or Coreper, a club of national ambassadors whose job is to prepare legislation for the Council of the EU. This Wednesday, they may “prepare” it straight into existence.

According to MEP Martin Sonneborn, Coreper’s diplomats could be ready to endorse the European Commission’s digital surveillance project in secret.

It was already due for approval a week earlier before mysteriously vanishing from the schedule. Now it’s back, with privacy advocates watching like hawks who suspect the farmer’s got a shotgun.

The Commission calls Chat Control 2.0 a child-protection measure. The branding suggests moral urgency; the text suggests mass surveillance. The proposal would let governments compel messaging services such as WhatsApp or Signal to scan users’ messages before they’re sent.

Officials insist that the newest version removes mandatory scanning, which is a bit like saying a loaded gun is safer because you haven’t pulled the trigger yet.

The Disguised Return of The EU’s Private Message Scanning Plot

A major political confrontation over online privacy is approaching as European governments prepare to decide on “Chat Control 2.0,” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text, providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.

UK Crime Agency Backs “Upload Prevention” Plan to Scan Encrypted Messages

Britain’s Internet Watch Foundation (IWF) has decided that privacy needs a chaperone.

The group has launched a campaign urging tech companies to install client-side scanning in encrypted apps, a proposal that would make every private message pass through a local checkpoint before being sent.

The IWF calls it an “upload prevention” system. Critics might call it the end of private communication disguised as a safety feature.

Under the plan, every file or image shared on a messaging app would be checked against a database of known child sexual abuse material (CSAM).

The database would be maintained by what the IWF describes as a “trusted body.” If a match is found, the upload is blocked before encryption can hide it. The pitch is that nothing leaves the device unless it’s cleared, but that is like claiming a home search is fine as long as the police do not take anything.
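In code terms, the proposed checkpoint looks roughly like the sketch below, a minimal Python illustration under the assumption of an exact-hash blocklist; the hash entry is hypothetical, and the encryption and transport functions are placeholders standing in for a real app’s machinery.

```python
import hashlib

# Hash list supplied by the "trusted body"; the entry is hypothetical.
BLOCKLIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def encrypt_for_recipient(data: bytes) -> bytes:
    """Stand-in for the app's end-to-end encryption step."""
    return data[::-1]  # placeholder transform, not real cryptography

def transmit(ciphertext: bytes) -> None:
    """Stand-in for the network send."""
    print(f"sent {len(ciphertext)} bytes")

def send_attachment(data: bytes) -> None:
    # The checkpoint runs on the user's own device, before encryption
    # is applied: that is what makes the scanning "client-side". The
    # ciphertext is never inspected; the plaintext is.
    if hashlib.sha256(data).hexdigest() in BLOCKLIST:
        raise PermissionError("upload blocked by client-side scan")
    transmit(encrypt_for_recipient(data))

send_attachment(b"holiday photo")  # not in the blocklist, so it sends
```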

As has been shown in Germany, this technology would not only catch criminals. Hashing errors and false positives happen, which means lawful material could be stopped before it ever leaves a phone.

And once the scanning infrastructure is built, there is nothing stopping it from being redirected toward new categories of “harmful” or “illegal” content. The precedent would be set: your phone would no longer be a private space.

Although the IWF is running this show, it has plenty of political muscle cheering it on.

Safeguarding Minister Jess Phillips praised the IWF campaign, saying: “It is clear that the British public want greater protections for children online and we are working with technology companies so more can be done to keep children safer. The design choices of platforms cannot be an excuse for failing to respond to the most horrific crimes… If companies don’t comply with the Online Safety Act they will face enforcement from the regulator. Through our action we now have an opportunity to make the online world safer for children, and I urge all technology companies to invest in safeguards so that children’s safety comes first.”

That endorsement matters. It signals that the government is ready to use the already-controversial Online Safety Act to pressure companies into surveillance compliance.

Ofcom, armed with new regulatory powers under that Act, can make “voluntary” ideas mandatory with little more than a memo.

The UK’s approach to online regulation is becoming increasingly invasive. The government recently tried to compel Apple to install a back door into its encrypted iCloud backups under the Investigatory Powers Act. Apple refused and instead pulled its most secure backup option from British users, leaving the country with weaker privacy than nearly anywhere else in the developed world.

The EU’s Two-Tier Encryption Vision Is Digital Feudalism

Sam Altman, CEO of OpenAI, recently showed a moment of humanity in a tech world that often promises too much, too fast. He urged users not to share anything with ChatGPT that they wouldn’t want a human to see. The Department of Homeland Security in the United States has already started to take notice.

His caution strikes at a more profound truth that underpins our entire digital world. In a realm where we can no longer be certain whether we’re dealing with a person, it is clear that software is often the agent communicating, not people. This growing uncertainty is more than just a technical challenge. It strikes at the very foundation of trust that holds society together.

This should cause us to reflect not just on AI, but on something even more fundamental, far older, quieter and more critical in the digital realm: encryption.

In a world increasingly shaped by algorithms and autonomous systems, trust is more important than ever. 

Encryption is our foundation

Encryption isn’t just a technical layer; it is the foundation of our digital lives. It protects everything from private conversations to global financial systems, authenticates identity and enables trust to scale across borders and institutions.

Crucially, it’s not something that can be recreated through regulation or substituted with policy. When trust breaks down, when institutions fail or power is misused, encryption is what remains. It’s the safety net that ensures our most private information stays protected, even in the absence of trust.

A cryptographic system isn’t like a house with doors and windows. It is a mathematical contract: precise, strict, and meant to be unbreakable. Here, a “backdoor” is not just a secret entry but a flaw embedded in the logic of the contract, and one flaw is all it takes to destroy the entire agreement. Any weakness introduced for one purpose could become an opening for everyone, from cybercriminals to authoritarian regimes. Because the whole structure is built on trust in strong, unbreakable code, it begins to collapse once that trust is broken. And right now, that trust is under threat.
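One way to see why a single flaw voids the whole contract is to model the simplest kind of backdoor, key escrow. The Python sketch below, assuming the third-party cryptography package, encrypts each message under a fresh key but also wraps that key under an escrow master key; anyone who holds the master key, whether a vetted agency or whoever manages to steal it, can read every message the system has ever carried.

```python
# A toy key-escrow scheme, assuming the third-party `cryptography`
# package (pip install cryptography). Purely illustrative, not a
# real protocol.
from cryptography.fernet import Fernet

escrow_master = Fernet(Fernet.generate_key())  # the "backdoor" key

def encrypt_message(plaintext: bytes) -> tuple[bytes, bytes]:
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    # The flaw in the contract: every session key is also wrapped
    # under the escrow key, "for authorized access only".
    wrapped_key = escrow_master.encrypt(session_key)
    return ciphertext, wrapped_key

def escrow_decrypt(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    # Whoever holds escrow_master, an agency under warrant or a thief
    # who exfiltrated it, can unwrap the key and read the message.
    session_key = escrow_master.decrypt(wrapped_key)
    return Fernet(session_key).decrypt(ciphertext)

ct, wk = encrypt_message(b"confidential")
assert escrow_decrypt(ct, wk) == b"confidential"
```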

Signal Threatens to Exit Europe Over EU Push for Messaging App Scanning Law

Signal is warning it will walk away from Europe rather than participate in what privacy defenders describe as one of the most dangerous surveillance schemes ever proposed by the EU.

Lawmakers in Brussels are pressing for a law that would compel messaging apps to break their own security by installing scanning systems inside private communications.

Meredith Whittaker, president of Signal, said the company will never compromise on encryption to satisfy government demands.

“Unfortunately, if we were given the choice of either undermining the integrity of our encryption and our data protection guarantees or leaving Europe, we would make the decision to leave the market,” she told the dpa news agency.

The draft legislation is framed as a child protection measure, but would require all major messengers, from WhatsApp to Signal to Telegram, to monitor every message before it is encrypted.

This would eliminate true private communication in Europe and create tools that could be abused for mass surveillance.

Privacy advocates have repeatedly warned that once a backdoor exists, there is no way to restrict who uses it or for what purpose.

Whittaker was clear about the stakes. “It guarantees the privacy of millions upon millions of people around the world, often in life-threatening situations as well.”

She added that Signal refuses to enable chat control, remarking that “it’s unfortunate that politicians continue to fall prey to a kind of magical thinking that assumes you can create a backdoor that only the good have access to.”

Any such system, she argued, would make everyone less safe.

The European Parliament already rejected the scanning mandate with a strong cross-party majority, recognizing the threat it poses to basic rights.

But within the Council of the EU, where member state governments are represented, the push for chat control remains alive. Denmark’s presidency could renew momentum for the proposal, even though countries like Germany have so far resisted.

Germany’s position is pivotal. The coalition agreement of its current government promises to defend “the confidentiality of private communications and anonymity online.”

Yet the agreement qualifies that promise with the phrase “in principle,” raising alarms that exceptions could open the door to backdoors in messaging apps.

If Germany wavers, Europe could be on the verge of losing secure communication altogether.
