Canada’s Bill C-22 Mandates Mass Metadata Surveillance of Canadians

Canada’s Liberal government has introduced Bill C-22, the Lawful Access Act, 2026, a surveillance bill that compels electronic service providers to store Canadians’ metadata for a year and hands police and intelligence agencies new tools to access it.

We obtained a copy of the bill for you here.

The bill follows a failed first attempt, Bill C-2, which collapsed under the weight of near-universal criticism from opposition parties, rights groups, and the tech industry.

This is a mandatory data retention regime that forces companies to hold location data, device information, and other sensitive metadata on every Canadian, not just those suspected of crimes, ready for law enforcement retrieval via warrant. The logic is familiar: build the haystack first, search it later.

Keep reading

AI Can Now Unmask Anonymous Internet Users, New Study Finds

It looks like AI may now be able to unmask anonymous accounts across the internet at scale. That’s according to a new study by Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), and Florian Tramèr (ETH Zurich), published on arXiv.

In the paper, “Large-Scale Online Deanonymization with LLMs,” the researchers show that modern large language models (LLMs) can re-identify people behind pseudonymous online accounts at a scale and accuracy that far surpass previous techniques.

The core contribution is an automated deanonymization pipeline powered by LLMs, according to the new study. Instead of relying on structured datasets or hand-engineered features—like earlier attacks on the Netflix Prize dataset—the system works directly on raw, unstructured text.

Given posts, comments, or interview transcripts written under a pseudonym, the pipeline extracts identity-relevant signals, searches for likely matches using semantic embeddings, and then uses higher-level reasoning to verify the most promising candidates while filtering out false positives. The result is a scalable attack that mirrors—and in some cases exceeds—the effectiveness of a dedicated human investigator.
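The two-stage shape of that pipeline — cheap embedding-based retrieval followed by a more careful verification pass — can be sketched in miniature. This is an illustrative toy only: a bag-of-words cosine similarity stands in for real LLM embeddings, and a similarity threshold stands in for the LLM reasoning step the paper uses; none of the names or thresholds come from the study.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a semantic embedding: bag of lowercase words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def deanonymize(pseudonymous_text: str, candidates: dict[str, str],
                top_k: int = 3, threshold: float = 0.5) -> list[str]:
    """Rank candidate identities by similarity, then filter the best matches.

    candidates maps an identity name to its public text (e.g. a profile bio).
    """
    query = embed(pseudonymous_text)
    # Stage 1: retrieval — rank all candidates by embedding similarity.
    ranked = sorted(candidates,
                    key=lambda c: cosine(query, embed(candidates[c])),
                    reverse=True)[:top_k]
    # Stage 2: verification — in the paper this is an LLM reasoning step
    # that weeds out false positives; here, a simple similarity threshold.
    return [c for c in ranked if cosine(query, embed(candidates[c])) >= threshold]

candidates = {"alice": "rust compilers and mountain climbing enthusiast",
              "bob": "jazz piano and pastry baking"}
result = deanonymize("love rust compilers and mountain climbing", candidates)
# -> ["alice"]
```

The real attack replaces both stages with far more capable components, but the architecture — broad retrieval, then expensive verification of a shortlist — is what makes it scale.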

To evaluate their approach, the researchers constructed three datasets with known ground truth. The first links pseudonymous Hacker News users to real-world LinkedIn profiles, relying on cross-platform clues embedded in public text. The second matches users across movie discussion communities on Reddit. The third takes a single Reddit user’s history, splits it into two time-separated profiles, and tests whether the system can reconnect them.

Across all three settings, LLM-based methods dramatically outperformed classical baselines, which often achieved near-zero recall.

The headline numbers are striking. In some experiments, the system achieved up to 68% recall at 90% precision—meaning it correctly identified a substantial portion of targets while keeping false accusations low. Even when matching temporally split Reddit accounts separated by a year, performance remained strong. In contrast, traditional non-LLM approaches struggled to produce meaningful matches. The findings suggest that advances in reasoning and representation learning have transformed deanonymization from a niche, data-hungry attack into a broadly applicable capability.
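As a reminder of what those two numbers measure: precision is the fraction of claimed matches that are correct, and recall is the fraction of true targets actually found. The figures below are illustrative, not the paper’s data — they just reproduce the "68% recall at 90% precision" shape.

```python
def precision_recall(claimed: set, targets: set) -> tuple[float, float]:
    """Precision: share of claims that are right; recall: share of targets found."""
    hits = len(claimed & targets)
    precision = hits / len(claimed) if claimed else 0.0
    recall = hits / len(targets) if targets else 0.0
    return precision, recall

# Illustrative: 100 true targets; the attacker names 75 accounts, 68 correctly.
targets = set(range(100))
claimed = set(range(68)) | {1000 + i for i in range(7)}
p, r = precision_recall(claimed, targets)
# p = 68/75 (about 0.91), r = 0.68
```

High precision is what makes such an attack usable in practice: it means the system rarely accuses the wrong person, even while identifying a large share of its targets.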

Keep reading

The App Store Accountability Act Is A Privacy Nightmare Disguised As Child Protection

Washington has discovered a familiar political trick: wrap a flawed policy in the language of protecting children and hope nobody reads the fine print. The latest example is the App Store Accountability Act, a bill championed by lawmakers who appear eager to regulate the internet without understanding how it actually works.

Supporters insist the legislation will protect kids online. In reality, it risks undermining privacy, violating constitutional protections, and creating a cybersecurity disaster in the process.

And remarkably, Congress is pushing forward with this even though federal courts have already signaled that this exact regulatory model is unconstitutional.

The App Store Accountability Act would require app stores to verify the ages of every user and share age information with app developers. On paper, that sounds straightforward. In practice, it would force companies to collect massive amounts of sensitive personal data simply to download everyday apps.

Want to download a weather app? Verify your age.

Want to install a calculator? Verify your age.

Want to read the news? Verify your age.

The practical result is obvious: app stores would be compelled to gather highly sensitive identity data on tens of millions of Americans and then distribute that information to countless third-party developers.

This could be one of the largest digital identity honeypots ever conceived.

Security experts have been warning about this for months. In fact, 419 cybersecurity and privacy academics from 30 countries recently signed an open letter warning that large-scale age verification systems are “dangerous and socially unacceptable” because they create enormous new attack surfaces for hackers and data thieves.

The logic is simple. If every app download requires age verification, that means sensitive identity data must be stored, transmitted, and accessed across thousands of services. Instead of limiting the spread of personal information, the bill effectively multiplies it.

For cybercriminals, it would be a dream target.

Keep reading

UK Parliament Plans ISP Blocking and Age Verification Powers

If you wanted a case study in how modern democracies widen state oversight step by step, Britain has offered a clear example. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity.

These measures advanced through procedural votes and technical amendments that sound administrative, yet they carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services.

But don’t get too excited. Replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content.

Keep reading

Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters surveyed 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance regime.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code, and non-compliance with each carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Britain is Trying to Censor Americans – But America is Fighting Back

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers, or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform.

This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025.

The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed, and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible.

Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm, and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

Keep reading

Congress Is Considering Abolishing Your Right to Be Anonymous Online

In August 2024, the Biden administration hosted hundreds of influencers at the White House for the first-ever Creator Economy Conference. Neera Tanden, a senior Biden adviser, took to the stage and bemoaned anonymity online. The influencers alongside her agreed, pushing the idea that anonymous speech on the internet is harmful, and regulation is needed to force the use of real names on social media. The audience whispered excitedly as those on stage spoke about how proposed laws like the Kids Online Safety Act, or KOSA, could unmask every troll. 

This narrative of online safety, particularly in relation to children, has become central to the bipartisan effort to censor and deanonymize the internet for everyone. Today, a package of a dozen “child online safety” bills is moving forward in the House of Representatives with bipartisan support. The laws, framed as a way to crack down on harmful content and make the internet safer, would force social media companies to enact invasive identity verification measures in order to keep children from accessing online spaces.

The problem is that there’s no way to reliably verify someone’s age without verifying who they are. A platform cannot magically discern that a user is 16 without collecting identifying information, whether through government documents such as a passport, payment information like a credit card, or other identity-disclosing data. Whether that data is stored by the platform itself or outsourced to a vendor, the result is always the same: A user’s offline identity is forever linked with their online behavior.

Stripping anonymity from the internet would constitute one of the most sweeping rollbacks of civil rights in recent history. It would allow for unprecedented levels of mass surveillance and censorship, endangering the most marginalized members of society. Whistleblowers exposing corporate wrongdoing could be tracked and fired, government employees speaking out about illegal behavior or bad policies could face prosecution, and activists organizing protests could be identified and surveilled before ever setting foot on the street.

Keep reading

Judge Blocks Virginia’s One-Hour Social Media Limit for Minors as Unconstitutional

A federal judge has blocked Virginia’s attempt to limit minors to one hour of social media per day, ruling the law violates the First Amendment. The decision is a significant check on a growing wave of state legislation that treats time spent reading, watching, and communicating online as something the government can ration.

Judge Patricia Tolliver Giles issued the preliminary injunction Friday, finding that Virginia “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit.”

We obtained a copy of the opinion for you here.

The ruling halts enforcement of Senate Bill 854, which carried fines of $7,500 per violation and required platforms to use “commercially reasonable methods” to verify user ages.

The law’s problem wasn’t just the one-hour cap. It was how the cap worked. The state set the default, and parents could ask to change it. That structure puts the government, not families, in control of baseline access to speech. Parental consent here overrides a government restriction that shouldn’t exist in the first place.

Giles found the law over-inclusive in a way that illustrates exactly how blunt these restrictions are. “A minor would be barred from watching an online church service if it exceeded an hour on YouTube,” she wrote, “yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform.”

The law doesn’t regulate harm. It regulates platforms, which means it catches protected speech indiscriminately.

NetChoice, the trade association whose members include Meta, YouTube, Snap, Reddit, and TikTok, sued to stop the law. In November, NetChoice argued that “Virginia has with one broad stroke restricted access to valuable sources for speaking and listening, learning about current events and otherwise exploring the vast realms of human thought and knowledge.” The judge agreed they had standing to pursue a permanent block and found they were likely to succeed on the merits.

Virginia’s attorney general is defending the law alongside 29 other states from both parties. A spokesperson said: “We look forward to continuing to enforce laws that empower parents to protect their children from the proven harms that can come through social media.” The new Democratic attorney general, Jay Jones, who took office in January, had announced he intended to fully enforce the law, which was signed by Republican Governor Glenn Youngkin.

Keep reading

Scientists warn against crappy age verification: ‘if implemented without careful consideration… the new regulation might cause more harm than good’

As age verification becomes more commonplace across the web, some are opposing its rollout on security and privacy grounds. An open letter signed by over 400 researchers and scientists, laying out the many reasons why age verification (and especially the current age assurance technology) isn’t all it’s cracked up to be, is now available to read in full.

Here’s a précis of the whole thing: governments across the world are adopting legislation mandating the use of age assurance methods, in the name of keeping kids off the bad parts of the web. That sounds like a good idea until you look into the details, which suggest these methods are often haphazardly applied with little regard for privacy and data protection.

The open letter outlines a few key arguments:

How easily age verification can be bypassed. This was demonstrated by Discord’s age verification, provided by K-id, which could be bypassed using the face of Sam, the protagonist of Death Stranding. As the open letter points out, it’s possible to lie about one’s age, trick a system, or buy age-verified credentials online. VPNs are also widely available and provide an easy way to bypass any and all age assurance methods, even if access to those VPNs is age-restricted.

How unreliable age estimation can be. Achieving any semblance of effectiveness would potentially require large-scale, invasive data collection or the widespread use of government IDs at every online interaction. As the letter notes, “We conclude that age assessment presents an inherent disproportionate risk of serious privacy violations and discrimination, without guarantees of effectiveness.”

How it necessitates a global trust infrastructure. Building such an infrastructure is one of the main goals of the EU’s digital identity wallet, intended as a common foundation for age assurance across member states, though it is only pan-EU. As the letter suggests, “even if such a trust infrastructure would exist, checks can be circumvented by acquiring valid certificates or using VPNs, as long as age assurance regulations are not universally enforced by all affected services.”

How it can push users to lesser-known, potentially dangerous websites. When age assurance is enforced and the larger, more responsible websites comply, users may be pushed toward lesser-known, potentially dangerous or scam websites. Following the rollout of the UK’s Online Safety Act, some of the first investigations the regulator launched were into porn websites that did not immediately comply with the new age verification rules. Other websites chose to turn off services to the UK altogether.

Keep reading

California Law Forces Age-Tracking Into Every Operating System by 2027

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API.

Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.

The requirements are particularly absurd for open-source Linux operating systems. Linux exists specifically because some people want computing that doesn’t surveil them. That’s not incidental to why the platform exists; it’s foundational.

Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody.

Keep reading