Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights — but by cherry-picking from the EU Charter of Fundamental Rights those rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sexual Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, treating them all as potential threats rather than as speakers whose communications enjoy a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the regulation is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.
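To make concrete what “scanning” would mean in practice, here is a minimal, purely illustrative sketch of the hash-matching step that client-side scanning proposals typically contemplate. Every name, hash entry, and flagging step below is an assumption for illustration, not a detail of the draft regulation; deployed systems would also rely on perceptual rather than exact hashing, which is precisely what introduces false positives.

```python
import hashlib

# Placeholder set of hashes of known abusive material. In real proposals this
# list would be distributed by a designated authority; the entry here is the
# SHA-256 of the string "test", used purely so the example runs.
KNOWN_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_material(blob: bytes) -> bool:
    """Check an attachment against the hash list before it is encrypted.

    Deployed systems typically use perceptual hashes, which tolerate minor
    edits to an image but also produce false positives; plain SHA-256 is used
    here only to keep the sketch self-contained.
    """
    return hashlib.sha256(blob).hexdigest() in KNOWN_HASHES

def send(attachments: list[bytes]) -> str:
    """Sketch of a client that scans plaintext first, then sends."""
    for blob in attachments:
        if matches_known_material(blob):
            # The privacy objection: content is inspected on the user's device
            # before end-to-end encryption, and a match triggers a report.
            return "flagged: attachment reported before encryption"
    return "sent: attachments encrypted end-to-end"

if __name__ == "__main__":
    print(send([b"holiday photo"]))  # sent
    print(send([b"test"]))           # flagged
```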

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity of the problem or the turpitude of the crime. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting other core values.

But, wait. There’s more…

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three would require it outright: the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333).

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Tourists to US would have to reveal five years of social media activity under new Trump plan

Tourists to the United States would have to reveal their social media activity from the last five years, under new Trump administration plans.

The mandatory new disclosures would apply to the 42 countries whose nationals are currently permitted to enter the US without a visa, including longtime US allies Britain, France, Australia, Germany and Japan.

In a notice published on Tuesday, the US Customs and Border Protection agency (CBP) said it would also require any telephone numbers used by visitors over the same period, and any email addresses used in the last decade, as well as face, fingerprint, DNA and iris biometrics. It would also ask for the names, addresses, birthdates and birthplaces of family members, including children.

CBP said the new changes to the Electronic System for Travel Authorization (Esta) application were required in order to comply with an executive order issued by Donald Trump on the first day of his new term. In it, the US president called for restrictions to ensure visitors to the US “do not bear hostile attitudes toward its citizens, culture, government, institutions, or founding principles”.

The plan would throw a wrench into the World Cup, which the US is co-hosting with Canada and Mexico next year. Fifa has said it expects the tournament to attract 5 million fans to the stadiums, and millions more visitors to the US, Canada and Mexico.

Tourism to the US has already dropped dramatically in Trump’s second term, as the president has pushed a draconian crackdown on immigrants, including recent moves to ban all asylum claims and to stop migration entirely from more than 30 countries.

California tourism authorities are predicting a 9% decline in foreign visits to the state this year, while Hollywood Boulevard in Los Angeles reported a 50% fall in foot traffic over the summer. Las Vegas, too, has been badly hit by a decline in visits, worsened by the rise of mobile gambling apps.

Statistics Canada said the number of Canadian residents who made a return trip to the US by car dropped 36.9% in July 2025 compared with the same month in 2024, while commercial airline travel from Canada dropped by 25.8% in July compared with the previous year, as relations between the two countries plummeted.

The US has already started squeezing foreign tourism in other ways, slapping an additional $100 fee per foreign visitor per day to visit national parks, such as the Grand Canyon and Yosemite, on top of the regular admission fees. Nor will national parks have free admission on Martin Luther King Jr Day any longer: they will now only be free to visit on Trump’s birthday.

The notice gives members of the public two months to comment. The Department of Homeland Security, under which CBP operates, did not respond to media outlets’ requests for comment. Meta, which owns two of the biggest social media platforms – Facebook and Instagram – did not immediately respond to questions.

The Trump administration had already launched a more widespread crackdown on visas for people hoping to live and work in the country. US Citizenship and Immigration Services (USCIS) said in August that it will start looking for “anti-American” views, including on social media, when assessing the applications of people wanting to live in the US.

The administration has also demanded that prospective foreign students unlock their social media profiles; those who refuse will be suspected of hiding their activity. Several high-profile foreign-born students have been detained for voicing support for Palestinians. The social-media policy also applies to anyone applying for an H-1B visa for skilled workers, which are now also subject to a new, eye-watering $100,000 fee.

As recently as last week, the administration told consular officials to deny visas to anyone who might have worked in factchecking or content moderation, for example at a social media company, accusing them in blanket terms of being “responsible for, or complicit in, censorship or attempted censorship of protected expression in the US”.

It has suggested reducing visa lengths for foreign journalists from five years to eight months, and has started demanding any visitors who are not from the 42 visa-exempt countries pay a new $250 fee.

CBP claims the authority to search the devices of any prospective entrant to the US. Although you can refuse, you may then be denied entry. While CBP said in 2024 it searched about 47,000 devices of the 420 million people who crossed the US border that year, experts said the number may be much higher under the new Trump administration.

There were already fears that the World Cup could become chaotic if US immigration raids continue at the same torrid pace.

Keep reading

The Encryption Double Life of Canberra

The Australian government is quietly relying on encrypted messaging to conduct sensitive business, even as it hardens its stance against public use of secure communications.

While the public faces increasing surveillance and legal pressure for using end-to-end encryption, senior officials are steering policy conversations into private digital spaces, shielding them from scrutiny under Freedom of Information (FOI) laws.

Since midyear, ministerial staff have been advising lobbyists, peak bodies and industry groups to avoid email altogether and submit reform proposals through the encrypted messaging app Signal.

In some of these exchanges, officials have requested disappearing messages, ensuring there is no record retained on government systems.

Several sources confirmed to the Saturday Paper that this guidance is now common across a number of policy areas.

In addition to Signal, stakeholders have been encouraged to use phone calls for detailed conversations and limit the content of any written communications.

In at least one case, after a formal meeting, the follow-up came in the form of a verbal summary rather than the usual written recap sent by email.

While the government has maintained formal channels for official submissions, a secondary mode of policymaking is taking shape.

This mode operates out of reach of archiving protocols and public oversight.

One participant in this informal process described it as an effort to protect the early phases of policy development from outside scrutiny, arguing that “fluid thoughts and ideas” should be exempt from public record.

Yet the effect of these practices is to create a shadow layer of government consultation that leaves no trace and falls outside the accountability mechanisms intended to safeguard democratic participation.

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for age estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.
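As a purely illustrative sketch (not a description of any specific FTC or state design), consider how even a minimal “verify once, reuse a token” age-check service ends up holding a record that links the token back to the person who showed ID. Every name, key, and function below is hypothetical, and a real deployment would use asymmetric signatures rather than a shared key; the structural point is the issuance log that never evaporates.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"hypothetical-issuer-key"  # held by the verification provider

# Even a "check once, reuse the token" design keeps a record linking the
# issued credential back to the person who presented an ID document.
ISSUANCE_LOG: dict[str, dict] = {}

def issue_age_token(document_name: str, birth_year: int, site: str) -> str:
    """Hypothetical issuer: verifies an ID document, returns a signed claim."""
    claim = {"over_18": time.gmtime().tm_year - birth_year >= 18,
             "site": site, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    # Identity does not vanish: the issuer retains who asked, for which site,
    # and when, if only for fraud prevention and legal compliance.
    ISSUANCE_LOG[signature] = {"name": document_name, "site": site, "claim": claim}
    return payload.hex() + "." + signature

def verify_age_token(token: str) -> bool:
    """Hypothetical relying site: checks the signature, trusts the claim."""
    payload_hex, signature = token.split(".")
    payload = bytes.fromhex(payload_hex)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and json.loads(payload)["over_18"]

if __name__ == "__main__":
    token = issue_age_token("Jane Example", 1990, "example-forum.test")
    print(verify_age_token(token))  # True
    print(len(ISSUANCE_LOG))        # 1 -- the linkage record persists
```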

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading

Twin Brothers Charged with Plotting to Delete Government Databases and Steal Private Info

Twin brothers from Virginia were arrested for their alleged roles in destroying government databases hosted by a federal government contractor, the Justice Department said on Wednesday.

Muneeb and Sohaib Akhter, both 34 years old, were indicted in November for allegedly plotting to destroy databases used to store government information.

Muneeb was charged with conspiracy to commit computer fraud and to destroy records, two counts of computer fraud, theft of government records, and two counts of aggravated identity theft, while Sohaib was charged with conspiracy to commit computer fraud, destroying records, and computer fraud.

Bloomberg News reported in May how the two former federal contractors had compromised data across many government agencies, including the Internal Revenue Service (IRS) and the General Services Administration (GSA).

The Akhter brothers also pled guilty in 2016 to federal charges of conspiracy regarding data breaches at the State Department and a cosmetics company. The two worked at Opexus, a federal contractor that helped process government records.

Keep reading

EU Push to Make Message Scanning Permanent Despite Evidence of Failure and Privacy Risks

The European Union has a habit of turning its worst temporary ideas into permanent fixtures. This time it is “Chat Control 1.0,” the 2021 law that lets tech companies scan everyone’s private messages in the name of child protection.

It was supposed to be a stopgap measure, a temporary derogation of privacy rights until proper evidence came in.

Now, if you’ve been following our previous reporting, you’ll know the Council wants to make it permanent, even though the Commission’s own 2025 evaluation report admits it has no evidence the thing actually works.

We obtained a copy of the report for you here.

The report doesn’t even hide the chaos. It confesses to missing data, unproven results, and error rates that would embarrass a basic software experiment.

Yet its conclusion jumps from “available data are insufficient” to “there are no indications that the derogation is not proportionate.” That is bureaucratic logic at its blandest.

The Commission’s Section 3 conclusion includes the sentence “the available data are insufficient to provide a definitive answer” on proportionality, followed immediately by “there are no indications that the derogation is not proportionate.”

In plain language, they can’t prove the policy isn’t violating rights, but since they can’t prove that it is, they will treat it as acceptable.

The same report admits it can’t even connect the dots between all that scanning and any convictions. Section 2.2.3 states: “It is not currently possible…to establish a clear link between these convictions and the reports submitted by providers.” Germany and Spain didn’t provide usable figures.
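To see why the missing error-rate data matters at this scale, here is a back-of-the-envelope calculation with assumed numbers; none of the figures below come from the Commission’s report, they simply illustrate how even a highly accurate scanner floods investigators with false alarms when the scanned volume is enormous and genuine abuse material is rare.

```python
# Illustrative base-rate arithmetic with assumed numbers; none of these
# figures are drawn from the Commission's 2025 evaluation report.
messages_scanned_per_day = 10_000_000_000  # assumed volume across providers
prevalence = 1e-6                          # assumed share of truly abusive content
false_positive_rate = 0.001                # assumed 99.9% specificity
true_positive_rate = 0.9                   # assumed 90% sensitivity

true_hits = messages_scanned_per_day * prevalence * true_positive_rate
false_alarms = messages_scanned_per_day * (1 - prevalence) * false_positive_rate

print(f"true hits per day:    {true_hits:,.0f}")     # ~9,000
print(f"false alarms per day: {false_alarms:,.0f}")  # ~10,000,000
print(f"share of flags that are correct: {true_hits / (true_hits + false_alarms):.4%}")
```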

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability framework.

Keep reading

Smith Prosecutors Admitted They Risked Lawsuits To Illegally Grab Senators’ Private Records

In a move that has raised eyebrows among conservatives, special counsel Jack Smith and his team pursued the phone records of Republican members of Congress despite being cautioned about the potential for “litigation risk,” according to internal emails.

As reported by Western Journal, the emails, which were made public on Tuesday by Senators Chuck Grassley of Iowa and Ron Johnson of Wisconsin, expose the deliberations of prosecutors as they decided which members of Congress to target with subpoenas. These actions were part of the “Arctic Frost” investigation, which later evolved into Smith’s 2020 election case against then-President Donald Trump.

John Keller, the former head of the Justice Department Public Integrity Section, had warned the prosecutors in a May 2023 email, stating, “As you are aware, there is some litigation risk regarding whether compelled disclosure of toll records of a Member’s legislative calls violates the Speech or Debate Clause in the D.C. Circuit.” Despite this caution, Keller gave the green light for the subpoenas.

Keep reading

EU Parliament Votes for Mandatory Digital ID and Age Verification, Threatening Online Privacy

The European Parliament has voted to push the European Union closer to a mandatory digital identification system for online activity, approving a non-binding resolution that endorses EU-wide age verification rules for social media, video platforms, and AI chatbots.

Though presented as a child protection measure, the text strongly promotes the infrastructure for universal digital ID, including the planned EU Digital Identity Wallet and an age verification app being developed by the European Commission.

Under the proposal, every user would have to re-identify themselves at least once every three months to continue using major platforms. Children under 13 would be banned entirely, and teenagers between 13 and 16 would require parental approval to participate online.

Keep reading