EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

We obtained a copy of the letter for you here.

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

Keep reading

Pennsylvania School District Using AI-Enabled Wi-Fi To Search Students For Firearms

A Pennsylvania school district is using artificial intelligence to keep guns off its campuses. But civil liberties advocates have warned that the technology could lead to mass surveillance and violation of constitutional rights.

The Chartiers Valley School District in Allegheny County has implemented AI that harnesses the district’s Wi-Fi signals to determine whether people are carrying weapons as they enter the schools.

The technology, called Wi-AI, was developed by CurvePoint of Pittsburgh. CurvePoint grew out of AI research at Carnegie Mellon University.

According to the company, Wi-AI uses “spatial intelligence” to find weapons such as guns before they enter a school.

The AI system analyzes a space and detects where potential weapons are located by interpreting “how Wi-Fi signals reflect off people and objects.”

Once a possible weapon is found, security personnel, school administrators, or others can go to the location to determine whether there is actually a threat.
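The general idea behind this kind of RF sensing can be sketched in a few lines. The following toy example is an illustration only, not CurvePoint's proprietary algorithm: it assumes access to per-subcarrier Wi-Fi amplitude measurements, synthesizes a "person without metal" baseline, and flags frames whose reflection profile deviates sharply from it (a dense metal object distorts a band of subcarriers).

```python
# Toy illustration of RF-sensing weapon detection, NOT CurvePoint's actual
# method: metallic objects disturb Wi-Fi channel measurements, so a simple
# detector can flag profiles that deviate from a "no weapon" baseline.
import math
import random

random.seed(0)

def reflection_profile(carrying_metal: bool, n_subcarriers: int = 64) -> list[float]:
    """Synthesize per-subcarrier amplitudes; metal adds a correlated dip."""
    base = [1.0 + 0.05 * math.sin(i / 4) + random.gauss(0, 0.02)
            for i in range(n_subcarriers)]
    if carrying_metal:
        for i in range(20, 30):  # hypothetical multipath distortion band
            base[i] -= 0.4
    return base

def anomaly_score(profile: list[float], baseline: list[float]) -> float:
    """Mean absolute deviation from the no-weapon baseline."""
    return sum(abs(p - b) for p, b in zip(profile, baseline)) / len(profile)

baseline = reflection_profile(carrying_metal=False)
clean = anomaly_score(reflection_profile(False), baseline)
metal = anomaly_score(reflection_profile(True), baseline)
print(f"clean={clean:.3f} metal={metal:.3f}")  # the metal profile scores far higher
```

A real system would build its baseline from training data and face the hard part this sketch skips: distinguishing a pistol from keys, laptops, and water bottles, which is where the claimed error rate comes from.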

It is now in use at Chartiers Valley School District high school, middle school, and primary school campuses. CurvePoint CEO Skip Smith said that in a recent test, the system found a pistol hidden in a backpack. He said the technology has a 95 percent success rate, failing only 4 percent of its searches.

Smith said Wi-AI does not carry the same privacy concerns as other security systems because it does not rely on facial recognition or biometric data.

“We don’t know it’s you,” Smith told The Epoch Times. “We have no biometric information about you. Our system just sees a big bag of salt water.”

Darren Mariano, president of the Chartiers Valley Board of School Directors, said the district is excited to be the first in the country to adopt the technology.

“The safety of our students and staff is always our top priority,” he said in a statement. “We’re thrilled to be the first district in the nation to implement this groundbreaking technology.”

Keep reading

Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as Google’s instant messaging and video conference programs.

Until around Oct. 10, the Gemini AI assistant required users to deliberately opt in. After that date, the feature was allegedly “secretly” turned on by default for all users’ Gmail, Chat, and Meet accounts, enabling the AI to track users’ private data on those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way for users to turn off the feature, it requires users to look for it in the privacy settings to deactivate it, despite never having agreed to it in the first place, the complaint said.

The AI feature is categorized in “Google Workspace smart features” in Google settings. Once turned on, it means the user consents to the program using “Workspace content and activity” across Workspace or in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini is “downright creepy” in diving deep into his personal history, analyzing 16 years’ worth of emails after he signed up for a more advanced pro feature.

In a series of tests by Circelli, Gemini told him one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.

Keep reading

Bio-Digital Vaccine Passports and ‘On Patient Medical Recordkeeping’

Did you know that the only safe medical data is data that is stored inside your own body?

I didn’t know that either until Nic Hulscher recently discovered some very interesting research papers about ‘On Patient Medical Recordkeeping’ technology.

The quote below is from an article indexed in PubMed, published six years ago, in December 2019: “Accurate medical recordkeeping is a major challenge in many low-resource settings where well-maintained centralized databases do not exist, contributing to 1.5 million vaccine-preventable deaths annually.”

It took humans several hundred years to figure out that we are not able to maintain accurate medical records, but now we finally know.

And it’s a lucky thing that we only figured this out now, because we are finally reaching the stage where we are able to reliably record medical data: by encoding them into every living human body – in particular data about received vaccines.

There’s even a cute – no, more than cute: a heartwarming – acronym for this brilliant new record-keeping method: OPMR.

The following quote is from an article in ‘Nature Materials’ from February 2025:

“We developed a robust on-patient medical record-keeping (OPMR) technology using a dissolvable microneedle patch (MNP) that delivers a quantum dot (QD)-based near-infrared (NIR) fluorescent dye encapsulated in poly(methyl methacrylate) (PMMA) microparticles into the skin to encode medical information. This dye, once deposited into the dermis, is invisible to the naked eye, offering patient data privacy and anonymity, but provides discrete NIR signals that can be detected using a NIR imaging system.”

Isn’t it wonderful that we have found a way to not only make it impossible to lose medical records but to keep our medical records truly private and anonymous – and especially the number of vaccine microneedle patches we got administered? Nobody will ever know – except all the folks who detect the oh so discrete Near Infrared signals with the help of the NIR imaging system. And maybe it won’t be folks much longer who detect them but some friendly AI agent. Which makes it even more sublime.

We can also stop stressing about our medical records being unavailable when China or some other country cuts the subsea cables to crash the internet:

“By depositing the dye in a predefined pattern that correlates to a specific set of information, the technology can be imaged by healthcare workers to support next-dose decisions without requiring internet connectivity or the use of centralized databases.”
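The “predefined pattern that correlates to a specific set of information” amounts to encoding a record as a grid of dye dots. Here is a minimal sketch of that idea; the row layout and parity dot are invented for illustration, and the paper's actual encoding scheme may differ.

```python
# Minimal sketch of encoding a medical record (e.g., a dose count) as a
# predefined pattern of dye dots, readable offline. The layout and parity
# bit are hypothetical, not the scheme from the Nature Materials paper.

def encode_record(doses: int, bits: int = 4) -> list[int]:
    """Encode a dose count as a row of dots (1 = dye deposited) plus a parity dot."""
    if not 0 <= doses < 2 ** bits:
        raise ValueError("record out of range for this grid")
    pattern = [(doses >> i) & 1 for i in range(bits)]
    pattern.append(sum(pattern) % 2)  # parity dot guards against one misread spot
    return pattern

def decode_record(pattern: list[int], bits: int = 4) -> int:
    """Read the dots back; reject the scan if the parity dot disagrees."""
    if sum(pattern[:bits]) % 2 != pattern[bits]:
        raise ValueError("parity mismatch - rescan")
    return sum(b << i for i, b in enumerate(pattern[:bits]))

print(decode_record(encode_record(3)))  # round-trips to 3
```

The point the quote makes is exactly this property: anyone with the right imaging hardware can run the decode step, no database or network required.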

See? Internet connectivity is not required. Marvelous. Life-saving ‘next-dose decisions’ won’t be blocked ever – internet or not.

Keep reading

The Disguised Return of The EU’s Private Message Scanning Plot

A major political confrontation over online privacy is approaching as European governments prepare to decide on “Chat Control 2.0,” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text, providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.

Keep reading

Fannie Mae officials ousted after sounding alarm on sharing confidential housing data

A confidant of Bill Pulte, the Trump administration’s top housing regulator, provided confidential mortgage pricing data from Fannie Mae to a principal competitor, alarming senior officials of the government-backed lending giant who warned it could expose the company to claims that it was colluding with a rival to fix mortgage rates.

Emails reviewed by The Associated Press show that Fannie Mae executives were unnerved about what one called the “very problematic” disclosure of data by Lauren Smith, the company’s head of marketing, who was acting on Pulte’s behalf.

“Lauren, the information that was provided to Freddie Mac in this email is a problem,” Malloy Evans, senior vice president of Fannie Mae’s single-family mortgage division, wrote in an Oct. 11 email. “That is confidential, competitive information.”

He also copied Fannie Mae’s CEO, Priscilla Almodovar, on the email, which bore the subject line: “As Per Director Pulte’s Ask.” Evans asked Fannie Mae’s top attorney “to weigh in on what, if any, steps we need to take legally to protect ourselves now.”

While Smith still holds her position, the senior Fannie Mae officials who called her conduct into question were all forced out of their jobs late last month, along with internal ethics watchdogs who were investigating Pulte and his allies.

The dismissals rattled the housing industry and drew condemnation from Democrats. It also gave Pulte’s critics evidence to support claims that he has leveraged the nonpublic information available to him to further his own political aims.

“This is another example of Bill Pulte weaponizing his role to do Donald Trump’s bidding, instead of working to lower costs amidst a housing crisis,” said Sen. Elizabeth Warren, of Massachusetts, the ranking Democrat on the Senate Banking Committee. “His behavior raises significant questions, and he needs to be brought in front of Congress to answer them.”

The episode marks the latest example of Pulte using what is typically a low-profile position in the federal bureaucracy to enhance his own standing and gain the attention of President Trump. He’s prompted mortgage fraud investigations of prominent Democrats who are some of the president’s best known antagonists, including Sen. Adam Schiff of California, New York Attorney General Letitia James and California Rep. Eric Swalwell.

In June, he ordered Fannie Mae and Freddie Mac to prepare a proposal for the firms to accept cryptocurrency, another industry Trump has boosted, as part of the criteria for buying mortgages from banks. Last week, he persuaded Trump of the allure of a 50-year mortgage as a way to increase home buying and building – a proposal that was widely criticized because it would drastically increase the overall price of a loan.

Pulte also has focused on large home construction companies, which have drawn Trump’s ire. Pulte requested confidential Fannie Mae data and has publicly signaled that he is considering a crackdown if the companies do not increase construction volume.

“I’m looking at the Fannie Mae builder data and with the top three homebuilders we buy EASILY over $20 billion in THEIR LOANS!” he posted to X in early October.

In a brief statement, the Federal Housing Finance Agency, which Pulte leads, did not address questions from the AP, but said the agency “requires its regulated entities to carry out their operations in compliance with all applicable laws and regulations.”

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that “the proposed regulation raises significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

OpenTable is spying on you — and ratting out your bad habits, like showing up late or canceling, to restaurants

What happens at the dining table no longer stays at the dining table.

If the city’s servers suddenly seem to know your go-to drink order, or that you always ask for extra croutons on your salad – you’re not going crazy.

Reservation platform OpenTable is spying on its users and compiling personal information on guests to share with restaurants, both good and bad, from wine preferences to whether they cancel a same-day reservation.

This allows eateries to tailor service to your preferences, save your preferred seating or — if your AI notes reveal poor etiquette — cancel your reservation altogether, sources tell The Post.

“It’s not just spending habits or if they like Coca-Cola or bottled water. Now, we’re getting a taste of what a diner’s behavior at a restaurant is like: If they’re a late canceler, if they leave reviews a lot,” Shawn Hunter, a general manager for Sojourn Social on the Upper East Side told The Post of the feature he first noticed two weeks ago.

Keep reading

Dem-run city hires inspectors to snoop in residents’ garbage cans to make sure they’re recycling properly

Residents in a California city can expect to see trash inspectors lifting their garbage cans in the early morning hours as the city continues to crack down on recycling. 

Officials are sending teams of compliance officers, or ‘lid lifters,’ to walk through neighborhoods before trash collection and monitor whether residents are properly sorting their trash and recycling.

The initiative in San Diego was launched following the passage of a law in the California State Senate (SB 1383), which established a new organic waste recycling program. 

The city will not issue citations to those who violate the recycling rules, but instead will place an ‘oops’ tag on the bin, notifying the owner that they made a mistake. 

Some bins may have a ‘do not collect’ sticker on them, which requires homeowners to sift through their trash and call the city for a new pickup.  

The lid lifters won’t be sifting through garbage cans and are only tasked with inspecting what they can see after looking inside the bins. 

City Waste Reduction Program Manager Alexander Galasso told local ABC affiliate, KGTV: ‘Waste doesn’t end when you come to the trash can.’

‘There is a life after waste and we want to make sure that these are sorted correctly, because not only does it impact our staff and trucks, but it impacts what goes into our landfill.’

Keep reading

Time to Pay Attention: Europe Just Eviscerated Monetary Privacy, and It’s Coming Here Next

By 2027, the European Union will have completed the most invasive overhaul of its financial system in modern history. Under Regulation (EU) 2024/1624, cash transactions above €10,000 will be illegal, whether the payment is for a private sale, a used car, or a family heirloom.

“Persons trading in goods or providing services may accept or make a payment in cash only up to an amount of EUR 10 000 or the equivalent in national or foreign currency, whether the transaction is carried out in a single operation or in several operations which appear to be linked.” — Regulation (EU) 2024/1624, Article 80, paragraph 1
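The mechanics of the quoted rule are simple to state: the €10,000 cap applies to a single payment or to the sum of several payments that “appear to be linked.” How authorities decide that operations are linked is the contested part; this sketch just assumes linked payments have already been grouped.

```python
# Illustrative check of the Article 80 cash cap quoted above: the EUR 10,000
# limit applies whether the amount is paid at once or split across several
# linked operations. Grouping payments as "linked" is assumed done upstream.

CASH_CAP_EUR = 10_000

def cash_payment_allowed(linked_payments_eur: list[float]) -> bool:
    """True if the total of the linked cash operations stays within the cap."""
    return sum(linked_payments_eur) <= CASH_CAP_EUR

print(cash_payment_allowed([9_500]))         # True: a single payment under the cap
print(cash_payment_allowed([6_000, 5_000]))  # False: linked installments exceed it
```

Note that splitting an €11,000 purchase into two "separate" cash payments does not escape the rule, since linked operations are summed.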

Simultaneously, the Markets in Crypto-Assets Regulation (MiCA) forces all crypto service providers to implement full-blown surveillance via mandatory identity verification and reporting. An anonymous Bitcoin transfer? That window is closing. And rounding out the trifecta is the European Central Bank’s digital euro, which promises privacy—just not too much of it.

This isn’t a proposal. It’s happening. And if you think it’s just about catching criminals, you haven’t been paying attention.

The justification, as always, is safety. European officials cite €700 billion in annual money laundering as the reason for the crackdown, framing the new rules as a bold stand against crime and corruption. But what they’re building isn’t a net—it’s a cage. These laws don’t distinguish between a cartel kingpin and a retiree who prefers cash. They treat every transaction like a threat, every citizen like a suspect, and every private interaction as a problem to be solved by surveillance.

Keep reading