UK Crime Agency Backs “Upload Prevention” Plan to Scan Encrypted Messages

Britain’s Internet Watch Foundation (IWF) has decided that privacy needs a chaperone.

The group has launched a campaign urging tech companies to install client-side scanning in encrypted apps, a proposal that would make every private message pass through a local checkpoint before being sent.

The IWF calls it an “upload prevention” system. Critics might call it the end of private communication disguised as a safety feature.

Under the plan, every file or image shared on a messaging app would be checked against a database of known child sexual abuse material (CSAM).

The database would be maintained by what the IWF describes as a “trusted body.” If a match is found, the upload is blocked before encryption can hide it. The pitch is that nothing leaves the device unless it’s cleared, but that is like claiming a home search is fine as long as the police do not take anything.
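The flow described above is essentially a local blocklist check that runs before encryption. Here is a minimal sketch of that control flow; note that real client-side scanning proposals rely on perceptual hashes (PhotoDNA-style), which match re-encoded or resized copies of an image, whereas this illustration uses exact SHA-256 matching, and the blocklist entry is a made-up placeholder:

```python
import hashlib

# Hypothetical hash list distributed by the "trusted body".
# (Placeholder entry for illustration only.)
BLOCKLIST = {
    hashlib.sha256(b"known-abuse-image-bytes").hexdigest(),
}

def upload_allowed(file_bytes: bytes) -> bool:
    """Return False if the file matches the blocklist (the upload is
    blocked before encryption); True means the file is cleared to send."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in BLOCKLIST

# The checkpoint runs locally, before the messaging app encrypts anything.
print(upload_allowed(b"holiday-photo-bytes"))      # → True (cleared)
print(upload_allowed(b"known-abuse-image-bytes"))  # → False (blocked)
```

The key point the sketch makes concrete: the scan happens on the device itself, and whatever set of hashes the "trusted body" ships is what gets blocked.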

As experience in Germany has shown, this technology would not only catch criminals. Hashing errors and false positives happen, which means lawful material could be blocked before it ever leaves a phone.

And once the scanning infrastructure is built, there is nothing stopping it from being redirected toward new categories of “harmful” or “illegal” content. The precedent would be set: your phone would no longer be a private space.

Although the IWF is running this show, it has plenty of political muscle cheering it on.

Safeguarding Minister Jess Phillips praised the IWF campaign, saying: “It is clear that the British public want greater protections for children online and we are working with technology companies so more can be done to keep children safer. The design choices of platforms cannot be an excuse for failing to respond to the most horrific crimes… If companies don’t comply with the Online Safety Act they will face enforcement from the regulator. Through our action we now have an opportunity to make the online world safer for children, and I urge all technology companies to invest in safeguards so that children’s safety comes first.”

That endorsement matters. It signals that the government is ready to use the already-controversial Online Safety Act to pressure companies into surveillance compliance.

Ofcom, armed with new regulatory powers under that Act, can make “voluntary” ideas mandatory with little more than a memo.

The UK’s approach to online regulation is becoming increasingly invasive. The government recently tried to compel Apple to install a back door into its encrypted iCloud backups under the Investigatory Powers Act. Apple refused and instead pulled its most secure backup option from British users, leaving the country with weaker privacy than nearly anywhere else in the developed world.

Keep reading

FBI Seeks To Unmask Anonymous Web Archiving Service Owner

The subpoena, dated last Tuesday and posted publicly on Archive.today’s X account, states it relates to a federal criminal investigation being conducted by the FBI, as The Verge reported. However, the document provides no specific details about what alleged crime is under investigation.

The FBI is requesting comprehensive identifying information from Tucows, including customer or subscriber name, address of service, and billing address associated with Archive.today, per The Verge report.

Beyond basic contact details, the subpoena demands an extensive array of data such as telephone connection records, including incoming and outgoing calls and SMS or MMS records, payment information like credit card or bank account numbers, internet connectivity session times and durations, device identifiers, IP addresses, and details about services used such as email, cloud computing, and gaming services.

The subpoena instructs Tucows to keep its existence confidential indefinitely, as any disclosure could interfere with an ongoing investigation and enforcement of the law, as recounted by Gizmodo.

That request became moot when Archive.today publicly posted the document. Journalist Max Blumenthal, editor of The Grayzone, drew attention to the subpoena on X, emphasizing that Archive.today is used by journalists and researchers to “document edits to articles, bypass subscription walls and avoid giving traffic to the failing corporate media.”

Keep reading

Europe’s AI Surveillance Race Against the Rules That Protect Privacy

Europol’s deputy executive director, Jürgen Ebner, is urging the European Union to relax its own legal restraints on artificial intelligence, arguing that the rules designed to protect citizens are slowing down police innovation.

He wants a system that allows the agency to skip lengthy rights checks in “emergency” situations and move ahead with new AI tools before the usual data protection reviews are complete.

Ebner told POLITICO that criminals are having “the time of their life” with “their malicious deployment of AI,” while Europol faces months of delay because of required legal assessments.

Those safeguards, which include evaluations under the GDPR and the EU’s AI Act, exist to stop unaccountable automation from taking hold in law enforcement.

Yet Ebner’s comments reveal a growing tendency inside the agency to treat those same checks as obstacles rather than vital protections.

He said the current process can take up to eight months and claimed that speeding it up could save lives.

But an “emergency” fast track for AI surveillance carries an obvious danger. Once such shortcuts are created, the idea of what qualifies as an emergency can expand quickly.

Technologies that monitor, predict, or profile people can then slip beyond their intended use, leaving citizens exposed to automated systems that make judgments about them without transparency or recourse.

Over the past decade, Europol has steadily increased its technical capabilities, investing heavily in large-scale data analysis and decryption tools.

These systems are presented as essential for fighting cross-border crime, yet they also consolidate immense quantities of personal data under centralized control.

Without strong oversight, such tools can move from focused investigation toward widespread data collection and surveillance.

European Commission President Ursula von der Leyen has already promised to double Europol’s workforce and turn it into a central hub for combating organized crime, “navigating constantly between the physical and digital worlds.”

A legislative proposal to strengthen the agency’s powers is planned for 2026, raising questions about how much authority and access to data Europol will ultimately gain.

Ebner, who oversees governance at Europol, said that “almost all investigations” now involve the internet and added that the cost of technology has become a “massive burden on law enforcement agencies.”

He urged stronger collaboration with private technology firms, stating that “artificial intelligence is extremely costly. Legal decryption platforms are costly. The same is to be foreseen already for quantum computing.”

Keep reading

Palantir, Fractal And Your Personal Data Privacy – Get used to being used, because YOU are the product

Who controls the data the government collected from you for a generation?

Your insurance company collected data on your driving – so did your Lexus – who owns that data?

You told your doctor about controlled substances you used – and now it gets brought up in an interview.

If you can’t exclude someone from using your data, then you don’t control it. That means you really don’t own it. It’s that simple.

What does “own” mean here? Let’s define the terms.

Owning the data means you can do anything you want with it – share it, sell it, mine it, or build an A.I. language model with it.

From birth until the last Social Security check gets cashed, your data is collected by federal and state agencies, corporations and of course the internet.

Your teen daughter puts every waking moment on Facebook or Instagram – so who owns those hundreds of images?

TSA PreCheck, Medicare/Medicaid, Social Security, government or military retirement, TRICARE, veterans hospitals, and of course the IRS – all gather more data about every citizen than has ever been gathered in the history of mankind.

Each agency gathers different data, at different times, for slightly different purposes. And those purposes may change over time.

Who owns the rights to that data?

It’s a far stickier question than you think.

The knee-jerk response is that the government owns the data. They collected it for their purposes, so it’s theirs.

The government will certainly say so.

Keep reading

Wisconsin Lawmakers Propose VPN Ban and ID Checks on Adult Sites

Wisconsin legislators have found a new villain in their quest to save people from themselves: the Virtual Private Network.

The state’s latest moral technology initiative, split into Assembly Bill 105 and Senate Bill 130, would force adult websites to verify user ages and ban anyone connecting through a VPN.

It passed the Assembly in March and now waits in the Senate, where someone will have to pretend this is enforceable.

Supporters are selling the plan as a way to “protect minors from explicit material.”

The bill’s machinery reads like a privacy demolition project written by people who still call tech support to reset passwords.

The law would apply to any site that “knowingly and intentionally publishes or distributes material harmful to minors.” It then defines that material as anything lacking “serious literary, artistic, political, or scientific value for minors.”

The wording is broad enough to rope in half the internet, yet somehow manages to exclude “bona fide news” (as to be determined by the state) and cloud platforms that don’t create the content themselves.

Whether that covers social media depends on who you ask: lawyers, lobbyists, or whichever intern wrote the definitions section.

The bill instructs websites to delete verification data after access is granted or denied.

That sounds good until you recall how the tech industry handles deletion promises.

Au10tix left user records exposed for a year after pledging to delete them within 30 days. Tea suffered multiple breaches despite assurances of immediate deletion. In the real world, “deleted” often means “archived on an unsecured server until a hacker finds it.”

The headline feature is a rule penalizing anyone who uses a VPN to access restricted material. VPNs encrypt internet traffic and disguise user locations, which lawmakers apparently see as a threat to order.

The logic is that if people can hide their IP addresses, the state can’t check their ID to ensure they’re old enough to view certain content. That’s technically true and philosophically disturbing.

Officials in other places are already cheering this idea. Michigan introduced a proposal requiring internet providers to detect and block VPN traffic.

If Wisconsin adopts the rule, VPN users would become collateral damage. Journalists, activists, and everyday users who rely on encryption for safety would be swept up in the ban.

Keep reading

Russia Moves to Mandate State Biometric ID for Online Age Verification

Russian lawmakers are moving forward with a proposal that would make the country’s biometric and e-government systems the mandatory gatekeepers for online age verification.

If implemented, the measure would tie access to adult or “potentially harmful” content directly to a person’s verified state identity, dissolving any remaining expectation of online anonymity.

The plan, discussed on October 28, is being marketed as a child protection initiative. Officials insist it is designed to keep minors away from dangerous material, yet the scope of what qualifies is remarkably broad.

According to TechRadar, one official included pornography, violent or profane videos, and even “propaganda of antisocial behavior” in the list of restricted content.

The main part of the proposal is the use of the “Gosuslugi” digital services portal, which already functions as Russia’s main interface for state verification.

This system connects directly to the Unified System of Identification and Authentication (ESIA) and the national Unified Biometrics System (UBS), both of which are controlled by the government.

State Duma deputy Anton Nemkin, a former FSB officer, suggested that these networks “could be used to verify age without directly transmitting passport data to third-party platforms.”

In effect, the state would become the universal intermediary between citizens and the internet.

Legal experts specializing in digital rights argue that this initiative continues a long-established trajectory.

Since 2012, when Russia began constructing its online censorship framework under the pretext of protecting minors, each new regulation has chipped away at personal privacy while expanding government visibility into everyday digital life.

The current proposal also fits neatly within Moscow’s broader strategy of “digital sovereignty.”

Deputy Chairman of the State Duma Committee on Information Policy Andrei Svintsov recently claimed that every Russian internet user will lose their anonymity within “three years, five at most,” TechRadar reported.

This vision aligns with another state project approved in June, the development of a national “super app” integrating digital ID, government services, and payment systems, which would even let users “confirm one’s age to a supermarket cashier.”

Keep reading

Manufacturer issues remote kill command to disable smart vacuum after engineer blocks it from collecting data — user revives it with custom hardware and Python scripts to run offline

An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That’s when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn’t consented to. The user, Harishankar, decided to block the telemetry servers’ IP addresses on his network while keeping the firmware and OTA servers reachable. The vacuum kept working for a while, but soon refused to turn on at all. After a lengthy investigation, he discovered that a remote kill command had been issued to his device.

He sent it to the service center multiple times, where technicians would power it on and find nothing wrong with it. When they returned it, the vacuum would work for a few days and then fail to boot again. After several rounds of back-and-forth, the service center apparently got tired and simply stopped accepting it, saying it was out of warranty. Because of this, he decided to disassemble the thing to determine what had killed it and to see if he could get it working again.

Since the A11 was a smart device, it had an Allwinner A33 SoC running the Tina Linux operating system, plus a GD32F103 microcontroller to manage its plethora of sensors, including lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them from a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware.
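The article does not publish Harishankar’s actual scripts, and the GD32F103’s real wire protocol is not documented here, but host-side control code of this kind typically boils down to framing commands for a microcontroller over a serial link. A minimal sketch, with a hypothetical frame format (start byte, opcode, length, payload, additive checksum) and a made-up “set wheel speed” opcode:

```python
import struct

FRAME_HEADER = 0xAA  # assumed start-of-frame byte (illustrative)

def build_frame(opcode: int, payload: bytes) -> bytes:
    """Frame a command for the motor controller: header byte, opcode,
    payload length, payload, then a simple additive checksum byte."""
    body = struct.pack("BB", opcode, len(payload)) + payload
    checksum = (FRAME_HEADER + sum(body)) & 0xFF
    return bytes([FRAME_HEADER]) + body + bytes([checksum])

# Example: a hypothetical "set wheel speed" command (opcode 0x10)
# with signed left/right speeds as the two-byte payload.
frame = build_frame(0x10, struct.pack("bb", 40, -40))

# In practice the frame would go out a serial port, e.g. with pyserial:
#   serial.Serial("/dev/ttyUSB0", 115200).write(frame)
print(frame.hex())  # → aa100228d8bc
```

Testing each subsystem this way — one framed command at a time, with the response checked on the wire — is how one can separate a hardware fault from a firmware-level kill switch.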

Keep reading

California’s Mileage-Based Road Charge: What It Is, How It Would Work, and Why Privacy Risks Increase

As I’ve been investigating California’s mileage-based road charge, it’s clear this isn’t just a policy about road funding. It’s the quiet rollout of a system built to track, record, and bill movement itself. The state says it’s about fairness and modernization, but the deeper I look, the more it resembles a permanent surveillance and revenue network disguised as transportation reform. What’s being tested today through Caltrans and the State Transportation Agency is the blueprint for total mileage monitoring tied to personal identity.

California is advancing a plan to replace the state gas tax with a mileage-based road charge that taxes drivers by distance traveled. The effort is led by the California State Transportation Agency and Caltrans under a multi-year Road Charge program first authorized by SB 1077 in 2014 and extended and refined by SB 339 in 2021. The state’s stated purpose is to stabilize road funding as fuel-tax revenue declines.

As of October 2025, the program is still in the pilot and data-gathering phase. No statewide mandate or rate schedule has been enacted, but Caltrans has already built the structure necessary for full implementation. What is being called a “test” now is the foundation for what will later become mandatory reporting.

Keep reading

Google Adds Age Check Tech as Texas, Utah, and Louisiana Enforce Digital ID Laws

Google is preparing for a new era of digital age checks as state-level rules in Texas, Utah, and Louisiana begin to reshape how app stores operate.

To get ahead of these requirements, the company has introduced the Play Signals API in beta, a system built to help developers adapt to laws that will soon mandate age-based controls.

Starting in early 2026, each of the three states will enforce its own version of the App Store Accountability Act.

Texas’s law takes effect first, followed by Utah and Louisiana a few months later. Each statute requires app marketplaces to confirm the age range of their users through “commercially reasonable” verification methods.

Developers will be responsible for interpreting those signals and tailoring their apps accordingly. In some regions, they will also have to inform Google Play if a product update could require new parental consent.

For testing purposes, the company is providing a FakeAgeSignalsManager so that developers can simulate data before the laws officially apply.

Google’s rollout of its new Play Signals API is part of a broader shift toward a verified internet, one where digital access is increasingly tied to proof of identity.

The company’s beta API is being framed as a neutral compliance tool, but its function sets the stage for a more monitored web.

While the stated purpose is child safety and regulatory compliance, the architecture being built threatens to erode one of the internet’s core principles: pseudonymity.

The data points that determine whether someone is over 13 or over 18 can easily evolve into a persistent set of identifiers, linking activity across apps, accounts, and even devices. Once these signals are standardized, nothing prevents them from being combined with advertising, analytics, or behavioral tracking systems.

The result could be a world where age verification quietly becomes identity verification, and where “commercially reasonable” checks amount to permanent user profiling.

Keep reading

Lawmakers Want Proof of ID Before You Talk to AI

It was only a matter of time before someone in Congress decided that the cure for the internet’s ills was to make everyone show their papers.

The “Guidelines for User Age-verification and Responsible Dialogue Act of 2025,” or GUARD Act, has arrived to do just that.

We obtained a copy of the bill for you here.

Introduced by Senators Josh Hawley and Richard Blumenthal, the bill promises to “protect kids” from AI chatbots that allegedly whisper bad ideas into young ears.

The idea: force every chatbot developer in the country to check users’ ages with verified identification.

The senators call it “reasonable age verification.”

That means scanning your driver’s license or passport before you can talk to a digital assistant.

Keeping in mind that AI is being added to pretty much everything these days, the implications of this could be far-reaching.

Keep reading