EU Push to Make Message Scanning Permanent Despite Evidence of Failure and Privacy Risks

The European Union has a habit of turning its worst temporary ideas into permanent fixtures. This time it is “Chat Control 1.0,” the 2021 law that lets tech companies scan everyone’s private messages in the name of child protection.

It was supposed to be a stopgap measure, a temporary derogation from the EU’s privacy rules until proper evidence came in.

Now, if you’ve been following our previous reporting, you’ll know the Council wants to make it permanent, even though the Commission’s own 2025 evaluation report admits it has no evidence the thing actually works.

We obtained a copy of the report for you here.

The report doesn’t even hide the chaos. It confesses to missing data, unproven results, and error rates that would embarrass a basic software experiment.

Yet its conclusion jumps from an admitted lack of data straight to a clean bill of health. That is bureaucratic logic at its blandest.

The Commission’s Section 3 conclusion states that “the available data are insufficient to provide a definitive answer” on proportionality, followed immediately by the finding that “there are no indications that the derogation is not proportionate.”

In plain language, they can’t prove the policy isn’t violating rights, but since they can’t prove that it is, they will treat it as acceptable.

The same report admits it can’t even connect the dots between all that scanning and any convictions. Section 2.2.3 states: “It is not currently possible…to establish a clear link between these convictions and the reports submitted by providers.” Germany and Spain didn’t provide usable figures.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem, since it could cover anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively rewrites the internet’s liability framework.

Keep reading

Smith Prosecutors Admitted They Risked Lawsuits To Illegally Grab Senators’ Private Records

In a move that has raised eyebrows among conservatives, special counsel Jack Smith and his team pursued the phone records of Republican members of Congress despite being cautioned about the potential for “litigation risk,” internal emails reveal.

As reported by The Western Journal, the emails, made public on Tuesday by Senators Chuck Grassley of Iowa and Ron Johnson of Wisconsin, expose the prosecutors’ deliberations as they decided which members of Congress to target with subpoenas. These actions were part of the “Arctic Frost” investigation, which later evolved into Smith’s 2020 election case against then-President Donald Trump.

John Keller, the former head of the Justice Department Public Integrity Section, had warned the prosecutors in a May 2023 email, stating, “As you are aware, there is some litigation risk regarding whether compelled disclosure of toll records of a Member’s legislative calls violates the Speech or Debate Clause in the D.C. Circuit.” Despite this caution, Keller gave the green light for the subpoenas.

Keep reading

EU Parliament Votes for Mandatory Digital ID and Age Verification, Threatening Online Privacy

The European Parliament has voted to push the European Union closer to a mandatory digital identification system for online activity, approving a non-binding resolution that endorses EU-wide age verification rules for social media, video platforms, and AI chatbots.

Though presented as a child protection measure, the text strongly promotes the infrastructure for universal digital ID, including the planned EU Digital Identity Wallet and an age verification app being developed by the European Commission.

Under the proposal, every user would have to re-identify themselves at least once every three months to continue using major platforms. Children under 13 would be banned entirely, and teenagers between 13 and 16 would require parental approval to participate online.
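
The resolution doesn’t prescribe a mechanism, but the arithmetic of the rule is easy to sketch. Here is a hypothetical, purely illustrative piece of platform-side logic implementing the three-month re-identification clock and the age gates; every name and the enforcement flow are our assumptions, not the Parliament’s text:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the resolution's rules; the text prescribes no
# specific implementation, so all names here are illustrative.
REVERIFICATION_INTERVAL = timedelta(days=90)  # "at least once every three months"

def access_decision(age: int, parental_approval: bool,
                    last_verified: datetime) -> str:
    if age < 13:
        return "banned"                      # under-13s excluded entirely
    if age < 16 and not parental_approval:
        return "parental approval required"  # 13-16 need a guardian's sign-off
    if datetime.now(timezone.utc) - last_verified > REVERIFICATION_INTERVAL:
        return "re-verification required"    # the quarterly identity check
    return "allowed"
```

The privacy critics’ objection is visible right in the function signature: a platform cannot run this check without holding verified age and identity data for every single user, indefinitely.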

Keep reading

EU Council Approves New “Chat Control” Mandate Pushing Mass Surveillance

European governments have taken another step toward reviving the EU’s controversial Chat Control agenda, approving a new negotiating mandate for the Child Sexual Abuse Regulation in a closed session of the Council of the European Union on November 26.

The measure, presented as a tool for child protection, is once again drawing heavy criticism for its surveillance implications and the way it reshapes private digital communication in Europe.

Unlike earlier drafts, this version drops the explicit obligation for companies to scan all private messages but quietly introduces what opponents describe as an indirect system of pressure.

It rewards or penalizes online services depending on whether they agree to carry out “voluntary” scanning, effectively making intrusive monitoring a business expectation rather than a legal requirement.

Former MEP Patrick Breyer, a long-standing defender of digital freedom and one of the most vocal opponents of the plan, said the deal “paves the way for a permanent infrastructure of mass surveillance.”

According to him, the Council’s text replaces legal compulsion with financial and regulatory incentives that push major US technology firms toward indiscriminate scanning.

He warned that the framework also brings “anonymity-breaking age checks” that will turn ordinary online use into an exercise in identity verification.

The new proposal, brokered largely through Danish mediation, comes months after the original “Chat Control 2.0” proposal, with its explicit scanning mandate, appeared to have been shelved following widespread backlash.

It reinstates many of the same principles, requiring providers to assess their potential “risk” for child abuse content and to apply “mitigation measures” approved by authorities. In practice, that could mean pressure to install scanning tools that probe both encrypted and unencrypted communications.

Czech MEP Markéta Gregorová called the Council’s position “a disappointment…Chat Control…opens the way to blanket scanning of our messages.”

Keep reading

Chat Control 2.0: EU Moves Toward Ending Private Communication

Between the coffee breaks and the diplomatic niceties of Brussels bureaucracy, a quiet dystopian revolution might be taking place. On November 26, a roomful of unelected officials could nod through one of the most consequential surveillance laws in modern European history, without ever having to face the public.

The plan, politely titled the Child Sexual Abuse Regulation, sits on the agenda of the Committee of Permanent Representatives, or Coreper, a club of national ambassadors whose job is to prepare legislation for the Council of the European Union. This Wednesday, they may “prepare” it straight into existence.

According to MEP Martin Sonneborn, Coreper’s diplomats could be ready to endorse the European Commission’s digital surveillance project in secret.

It was already due for approval a week earlier before mysteriously vanishing from the schedule. Now it’s back, with privacy advocates watching like hawks who suspect the farmer’s got a shotgun.

The Commission calls Chat Control 2.0 a child-protection measure. The branding suggests moral urgency; the text suggests mass surveillance. The proposal would let governments compel messaging services such as WhatsApp or Signal to scan users’ messages before they’re sent.

Officials insist that the newest version removes mandatory scanning, which is a bit like saying a loaded gun is safer because you haven’t pulled the trigger yet.
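
The proposal publishes no algorithm, but the client-side scanning schemes at the center of this debate generally work the same way: fingerprint content on the device and check it against a provider-distributed list of flagged material before encryption happens. A minimal, purely illustrative sketch, with hypothetical names throughout:

```python
import hashlib

# Illustrative only: a hash list the provider would push to clients.
FLAGGED_HASHES: set[str] = set()

def report_to_provider(digest: str) -> None:
    # Hypothetical hook; a real scheme would feed a reporting pipeline.
    print(f"match reported: {digest}")

def scan_before_send(attachment: bytes) -> bool:
    """Return True if sending may proceed, False if the content was flagged."""
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in FLAGGED_HASHES:
        report_to_provider(digest)
        return False
    return True  # only after this check does the client encrypt and transmit
```

Real deployments tend to use perceptual rather than cryptographic hashes, so that resized or re-encoded images still match, and that robustness is precisely where the false positives critics keep warning about come from.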

Keep reading

GrapheneOS Quits France, Citing Unsafe Climate for Open Source Tech

GrapheneOS, the privacy-focused Android operating system, has ended all operations in France, saying the country is no longer a safe place for open source privacy projects.

Although French users will still be able to install and use the software, the project is moving every related service, including its website, forums, and discussion servers, outside French territory.

Until now, GrapheneOS used the Beauharnois data center of OVH, a hosting provider based in France, for some of its infrastructure. That setup is being dismantled.

The Mastodon, Discourse, and Matrix servers will operate from Toronto on a mix of local and shared systems. These changes are designed to remove any dependency on French service providers.

The developers said their systems do not collect or retain confidential user data and that no critical security infrastructure was ever stored in France. Because of that, the migration will not affect features such as update verification, digital signature checks, or downgrade protection.
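
That claim is consistent with how signed updates generally work: the trust anchor is a verification key pinned on the device, not the server that happens to host the files. A minimal sketch of that standard model, using PyNaCl for illustration (this is not GrapheneOS’s actual code):

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Demo keypair standing in for the verification key pinned in the OS image.
signing_key = SigningKey.generate()
PINNED_KEY = signing_key.verify_key

installed_version = 20251126  # hypothetical version code of the current build

def verify_update(payload: bytes, signature: bytes, version: int) -> bool:
    """Accept an update only if it is signed by the pinned key and is newer."""
    try:
        PINNED_KEY.verify(payload, signature)  # authenticity comes from the key
    except BadSignatureError:
        return False                           # forged or corrupted payload
    return version > installed_version         # downgrade protection
```

Under this model, a hostile or pressured host can refuse to serve updates, but it cannot forge one or silently roll clients back, which is why relocating the servers changes nothing about update security.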

The decision also applies to travel and work policies. Team members have been told not to enter France, citing both personal safety concerns and the government’s endorsement of the European Union’s Chat Control proposal.

That measure would allow authorities to scan private communications for illegal material, something privacy developers see as incompatible with secure digital design.

Keep reading

EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

We obtained a copy of the letter for you here.

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

Keep reading

Pennsylvania School District Using AI-Enabled Wi-Fi To Search Students For Firearms

A Pennsylvania school district is using artificial intelligence to keep guns off its campuses. But civil liberties advocates have warned that the technology could lead to mass surveillance and violations of constitutional rights.

The Chartiers Valley School District in Allegheny County has implemented AI that harnesses the district’s Wi-Fi signals to determine whether people are carrying weapons as they enter the schools.

The technology, called Wi-AI, was developed by CurvePoint of Pittsburgh. CurvePoint grew out of AI research at Carnegie Mellon University.

According to the company, Wi-AI uses “spatial intelligence” to find weapons such as guns before they enter a school.

The AI system analyzes a space and detects where potential weapons are located by interpreting “how Wi-Fi signals reflect off people and objects.”

Once a possible weapon is found, security personnel, school administrators, or others can go to the location to determine whether there is actually a threat.
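
CurvePoint has not published its method, but in the Wi-Fi sensing research literature this kind of detection typically means feeding channel state information (CSI), the fine-grained record of how signals reflect off bodies and objects, into a trained classifier. A hypothetical sketch of that general pipeline, with a placeholder model and threshold:

```python
import numpy as np

# General Wi-Fi-sensing pipeline from the research literature, not
# CurvePoint's actual system; the model and threshold are placeholders.

def extract_features(csi_frames: np.ndarray) -> np.ndarray:
    """Summarize a window of CSI frames (time x subcarriers, complex)."""
    amplitude = np.abs(csi_frames)
    phase = np.unwrap(np.angle(csi_frames), axis=0)
    return np.concatenate([amplitude.mean(axis=0), amplitude.std(axis=0),
                           phase.std(axis=0)])

def flag_possible_weapon(csi_frames: np.ndarray, model,
                         threshold: float = 0.9) -> bool:
    """Alert staff when the classifier's weapon score crosses the threshold."""
    score = model.predict_proba([extract_features(csi_frames)])[0][1]
    return score >= threshold  # a human then checks the flagged location
```

The threshold is where the real trade-off lives: set it low and staff chase false alarms all day, set it high and weapons slip through, a tension that single headline accuracy figures tend to gloss over.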

It is now in use at the Chartiers Valley School District’s high school, middle school, and primary school campuses. CurvePoint CEO Skip Smith said that in a recent test, the system found a pistol hidden in a backpack. He said the technology has a 95 percent success rate, failing in only 4 percent of its searches.

Smith said Wi-AI does not carry the same privacy concerns as other security systems because it does not rely on facial recognition or biometric data.

“We don’t know it’s you,” Smith told The Epoch Times. “We have no biometric information about you. Our system just sees a big bag of salt water.”

Darren Mariano, president of the Chartiers Valley Board of School Directors, said the district is excited to be the first in the country to adopt the technology.

“The safety of our students and staff is always our top priority,” he said in a statement. “We’re thrilled to be the first district in the nation to implement this groundbreaking technology.”

Keep reading

Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as Google’s instant messaging and video conference programs.

Until around Oct. 10, the Gemini AI assistant required users to deliberately opt in. After that date, Google allegedly turned the feature on “secretly” and by default across all users’ Gmail, Chat, and Meet accounts, enabling the AI to track users’ private data on those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way for users to turn the feature off, they must hunt for it in the privacy settings to deactivate something they never agreed to in the first place, the complaint said.

The AI feature lives under “Google Workspace smart features” in Google settings. Once it is turned on, the user is deemed to have consented to the program using “Workspace content and activity” across Workspace and in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini is “downright creepy” in how deeply it dove into his personal history, analyzing 16 years’ worth of emails after he signed up for Google’s AI Pro plan.

In a series of tests by Circelli, Gemini told him one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.

Keep reading