EU Revives Plan to Ban Private Messaging

The European Union is still wrestling with a controversial plan that would turn private messaging services into surveillance tools. For more than three years, talks have stalled over whether providers should be forced to scan every user’s messages for possible illegal material and forward anything suspicious to law enforcement.

The European Commission is still pushing for a universal scanning requirement.

In contrast, the European Parliament insists any checks should apply only to unencrypted messages from people already under suspicion. Attempts to strike a deal have repeatedly fallen apart, with Poland the latest presidency to walk away without an agreement.

July brought a change in leadership of the Council of the EU, with Denmark stepping in and putting chat scanning back at the top of the legislative pile. Copenhagen wants this handled as a priority and wasted no time tabling a new draft on its very first day in charge.

Leaked records from a closed-door July meeting show the Danish text closely tracks earlier proposals from Belgium and Hungary, with no concessions for encrypted conversations. A softer version from Poland, which would have made scanning voluntary and left encrypted chats alone, has been dropped entirely.

Out of 27 EU countries, 20 spoke during the July debate, each lodging what officials call a “comprehensive audit reservation.” Germany summed up the atmosphere by noting, “the familiar mood was clear.”

Italy, Spain, and Hungary have been in favor of mandatory chat scanning from the start. France could tip the balance since blocking the plan requires four countries representing at least 35 percent of the EU’s population. Paris has moved from tentative support to saying it could “basically support the proposal.”

Others remain cautious or opposed. Belgium, despite earlier enthusiasm, admits encrypted scanning is “a difficult topic nationally.” Estonia reports a “national conflict between security authorities and data protection officers regarding encryption and client-side scanning.” Austria is bound by a parliamentary vote against mandatory scanning or undermining encryption, a stance shared by the Netherlands. Luxembourg and Slovenia say they are still “not yet convinced.”

Keep reading

Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking until control over your image and representation is completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal grounds to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The proposal would amend the Scandinavian nation’s copyright laws so that everyone owns the rights to their own face, voice, and body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.

Keep reading

Supreme Court Urged to Halt Mississippi’s Online Digital ID Law Over Free Speech and Privacy Concerns

NetChoice has filed an emergency application with the US Supreme Court to halt the enforcement of Mississippi’s online age verification digital ID law, House Bill 1126, after the Fifth Circuit stayed a preliminary injunction without explanation. The group is urging the Court to reinstate the district court’s ruling and protect First Amendment rights, which it argues are under immediate threat.

The Mississippi law compels every person, regardless of age, to verify their identity before creating accounts on social media platforms, and requires minors to obtain explicit parental consent.

NetChoice argues that this framework “unconstitutionally imposes content-based parental-consent, age-verification, and monitoring-and-censorship requirements for vague categories of speech on social media websites.”

The emergency filing warns of far-reaching consequences, asserting that “the Act will prevent access to that expression for some users entirely—including those unwilling or unable to verify their age and minors who cannot secure parental consent.”

We obtained a copy of the filing for you here.

Adults would also be subject to this regime, required to share private information in order to access constitutionally protected online spaces.

According to the brief, “the Act would require adults and minors to provide personally identifying information to access all manner of fully protected speech.”

NetChoice compares this level of state control to a dystopian system where “stationing government-mandated clerks at every bookstore and theater to check identification before citizens can access books, movies, or even join conversations” would be the norm.

The brief continues, “This Act thus presents far different issues from pornography laws… it ‘directly targets’ a staggering amount of fully protected speech.”

Keep reading

Drones, cameras, AI: University of Illinois real-time crime center raises privacy concerns

Thousands of cameras. A fleet of drones. Gunshot detection devices. Stationary and vehicle-mounted automatic license plate readers.

A major metropolitan city? No, the University of Illinois Urbana-Champaign.

The University of Illinois Urbana-Champaign’s Real-Time Information Center furnishes the institution’s Division of Public Safety with a number of technologically sophisticated tools that have some privacy experts alarmed.

The drones, gunshot detection devices, automatic license plate readers, and campus-wide system of roughly 3,000 security cameras are among the tools currently utilized at the campus, which enrolls about 59,000 students.

Social media monitoring programs and “AI-driven video analytics software” are also among the technologies being evaluated for possible future implementation, according to a document sent by Urbana Police Chief Larry Boone.

He sent it to city officials as they weigh a proposed city ordinance that would establish stricter approval, oversight, and transparency requirements for Urbana’s own acquisition and use of the kinds of surveillance tools deployed by the university’s Real-Time Information Center.

According to the document, the Real-Time Information Center provides a wide array of services designed to enhance public safety, streamline operations, and support law enforcement agencies.

Keep reading

When Smart Meters Turn Into Spy Tools

California’s robust privacy protections are facing a critical test as the Electronic Frontier Foundation (EFF) and community advocates press forward with a lawsuit to dismantle what they describe as an illegal and biased surveillance operation run by Sacramento’s public electric utility.

In a legal filing submitted last week, the EFF laid out evidence that the Sacramento Municipal Utility District (SMUD), which serves more than 650,000 customers, has spent over a decade monitoring detailed home electricity data and funneling it to police without a warrant. The organization calls this an unconstitutional “dragnet surveillance” program that unlawfully invades household privacy on a massive scale.

We obtained a copy of the filing for you here.

“This case is about Sacramento Municipal Utility District’s…dragnet surveillance of SMUD customers’ homes using sensitive and confidential energy usage information,” the brief begins. “The decade-long surveillance violates the California Constitution and a state privacy statute.”

SMUD’s so-called “smart meters,” installed in nearly every home it serves, transmit power usage in 15-minute intervals to the utility multiple times per day. This data, the lawsuit argues, offers a detailed portrait of home life, including sleep patterns, occupancy, and even personal routines. “SMUD analysts can, in effect, use the data to digitally peer into a person’s home,” the brief explains.
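
As a rough illustration of what such interval data can give away, here is a minimal Python sketch; the readings, thresholds, and labels are hypothetical assumptions made for illustration, not SMUD’s actual data or analytic methods.

```python
# Illustrative only: a crude guess at household activity from 15-minute
# smart-meter readings. Sample values and thresholds are hypothetical,
# not SMUD data or methods.
from datetime import datetime, timedelta

BASELINE_KWH = 0.05   # assumed idle draw per interval (fridge, standby devices)
ACTIVE_KWH = 0.20     # assumed level suggesting someone is awake and active

# Hypothetical readings: kWh used in each 15-minute interval from midnight on
start = datetime(2025, 7, 1, 0, 0)
usage = [0.05, 0.04, 0.05, 0.06] * 6 + [0.35, 0.40, 0.30, 0.25] * 4
readings = [(start + timedelta(minutes=15 * i), kwh) for i, kwh in enumerate(usage)]

def label(kwh: float) -> str:
    """Map one interval's consumption to a rough occupancy guess."""
    if kwh <= BASELINE_KWH:
        return "likely asleep or away"
    if kwh >= ACTIVE_KWH:
        return "likely home and active"
    return "ambiguous"

for ts, kwh in readings:
    print(f"{ts:%H:%M}  {kwh:.2f} kWh  {label(kwh)}")
```

Even this toy rule marks when a household goes quiet for the night and when it stirs in the morning; analytics run over months of real interval data can be far more revealing, which is the heart of the lawsuit’s privacy argument.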

EFF alleges that SMUD has routinely handed over customer information to local police departments, including names, addresses, and usage history, without any individualized suspicion or judicial oversight. In many cases, these disclosures were based solely on arbitrary consumption thresholds. “SMUD has turned over…the names, addresses, and electrical consumption information of more than 33,000 customers through a zip code list,” the brief states.

Keep reading

The Wearables Trap: How the Government Plans to Monitor, Score, and Control You

Bodily autonomy—the right to privacy and integrity over our own bodies—is rapidly vanishing. The debate now extends beyond forced vaccinations or invasive searches to include biometric surveillance, wearable tracking, and predictive health profiling.

We are entering a new age of algorithmic, authoritarian control, where our thoughts, moods, and biology are monitored and judged by the state.

This is the dark promise behind the newest campaign by Robert F. Kennedy Jr., President Trump’s Secretary of Health and Human Services, to push for a future in which all Americans wear biometric health-tracking devices.

Under the guise of public health and personal empowerment, this initiative is nothing less than the normalization of 24/7 bodily surveillance—ushering in a world where every step, heartbeat, and biological fluctuation is monitored not only by private companies but also by the government.

In this emerging surveillance-industrial complex, health data becomes currency. Tech firms profit from hardware and app subscriptions, insurers profit from risk scoring, and government agencies profit from increased compliance and behavioral insight.

This convergence of health, technology, and surveillance is not a new strategy—it’s just the next step in a long, familiar pattern of control.

Surveillance has always arrived dressed as progress.

Every new wave of surveillance technology—GPS trackers, red light cameras, facial recognition, Ring doorbells, Alexa smart speakers—has been sold to us as a tool of convenience, safety, or connection. But in time, each became a mechanism for tracking, monitoring, or controlling the public.

What began as voluntary has become inescapable and mandatory.

Keep reading

Ireland’s Dangerous War on Encryption

The Irish government’s proposed Communications (Interception and Lawful Access) Bill would significantly expand the state’s ability to monitor digital communications, thereby striking at the very foundation of end-to-end encryption. 

This form of encryption, used by services like WhatsApp, iMessage, and Signal, ensures that only the sender and the recipient can access the content of a message. Under the new bill, Gardaí, the Defence Forces, and the Garda Ombudsman would be allowed to intercept private messages in real time. Achieving this would require altering or bypassing encryption entirely.
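
For a sense of what “only the sender and the recipient” means in practice, here is a minimal Python sketch of end-to-end encryption using the PyNaCl library; the keys and message are made up, and real messengers such as Signal layer much more on top (key verification, forward secrecy), so treat this as a sketch of the principle rather than any service’s implementation.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Keys live only on the two devices; the service in the middle relays
# ciphertext it cannot read.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # sender's keypair, generated on her device
bob_key = PrivateKey.generate()     # recipient's keypair, generated on his device

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The provider only ever handles the ciphertext; without one of the private
# keys there is nothing it can hand to law enforcement in readable form.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```

Intercepting content “in real time” would therefore mean holding copies of those private keys or scanning on the device before encryption happens, which is precisely the kind of built-in access point the following paragraphs describe.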

Such a measure would introduce a permanent vulnerability into digital infrastructure. Once a system is designed to allow access for one party, others can and will exploit it. 

Backdoors do not stay private. They create a single point of failure that can be used by cybercriminals, hostile foreign governments, or commercial spyware operations. 

The government claims that oversight and warrant requirements will ensure the powers are used responsibly. However, no legal safeguard can address the underlying technical risk created by breaking encryption. 

The presence of a backdoor makes every message on a platform more exposed, whether or not it is the target of surveillance. Encryption cannot be selectively weakened. Any interference compromises the security of the system for all users.

Major technology companies have already taken strong positions against laws that would force them to degrade encryption. 

Apple recently removed some of its data protection features from the UK rather than comply with legislation that would have weakened user privacy. 

Keep reading

How to Stop Google’s Gemini AI from Accessing Your Apps

Google has initiated a broad Android update that enables its Gemini AI assistant to access third-party apps such as WhatsApp, even when users had previously set their devices to block such interactions.

The shift began rolling out this week and appears to override individual privacy preferences, leaving many Android users without a clear path to opt out.

According to Ars Technica, in an email sent ahead of the rollout, Google linked to a support page confirming that “human reviewers (including service providers) read, annotate, and process” data accessed by Gemini.

The email fails to provide direct steps for those hoping to prevent the AI from integrating with their apps.

While it claims users can block specific app interactions, it also states that data will still be stored temporarily for 72 hours.

The message offers conflicting statements about user control. It says Gemini will begin integrating with services like WhatsApp, Phone, and Messages “whether your Gemini apps activity is on or off.”

Yet shortly after, it says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages does Google explain how users can fully remove Gemini from their devices or prevent it from being enabled in the background.

Keep in mind that WhatsApp is an end-to-end encrypted messaging app, and Gemini’s access to it undermines that encryption.

Keep reading

Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen as well as former board members Peter Thiel, Palantir Technologies (PLTR.O) co-founder, and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.

Keep reading