As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Macron Wants To Go Full “Ministry Of Truth” With Draconian Censorship Grab

French President Emmanuel Macron is facing fierce pushback from conservative voices within France over his renewed drive to grant the state sweeping new censorship powers, Barron’s reports.

On Friday, Macron once again raised the alarm about so-called “disinformation” spreading on social media, insisting that parliament grant authorities the ability to immediately block content deemed “false information.” As if the existing arsenal of censorship tools weren’t enough, the left-wing president now wants to establish a “professional certification” system that would effectively create an official, state-approved class of media outlets—separating those that toe the government’s ethical line from those that refuse to do so.

France’s right-wing press has reacted with outrage, with Vincent Bolloré’s Journal du Dimanche denouncing Macron’s “totalitarian drift” on free speech and warning of “the temptation of a ministry of truth.”

Bolloré-owned CNews and Europe 1 were equally scathing, with popular presenter Pascal Praud accusing the president of acting out of personal resentment, declaring the initiative comes from a “president unhappy with his treatment by the media and who wants to impose a single narrative.”

National Rally leader Jordan Bardella also delivered a blistering rebuke, saying in a statement, “Tampering with freedom of expression is an authoritarian temptation, which corresponds to the solitude of a man… who has lost power and seeks to maintain it by controlling information.”

Bruno Retailleau, head of the Republicans in the Senate, echoed the warning on X: “[N]o government has the right to filter the media or dictate the truth.”

EU Push to Make Message Scanning Permanent Despite Evidence of Failure and Privacy Risks

The European Union has a habit of turning its worst temporary ideas into permanent fixtures. This time it is “Chat Control 1.0,” the 2021 law that lets tech companies scan everyone’s private messages in the name of child protection.

It was supposed to be a stopgap measure, a temporary derogation of privacy rights until proper evidence came in.

Now, if you’ve been following our previous reporting, you’ll know the Council wants to make it permanent, even though the Commission’s own 2025 evaluation report admits it has no evidence the thing actually works.

We obtained a copy of the report for you here.

The report doesn’t even hide the chaos. It confesses to missing data, unproven results, and error rates that would embarrass a basic software experiment.

Yet its conclusion jumps from “available data are insufficient” to “there are no indications that the derogation is not proportionate.” That is bureaucratic logic at its blandest.

The Commission’s Section 3 conclusion includes the sentence “the available data are insufficient to provide a definitive answer” on proportionality, followed immediately by “there are no indications that the derogation is not proportionate.”

In plain language, they can’t prove the policy isn’t violating rights, but since they can’t prove that it is, they will treat it as acceptable.

The same report admits it can’t even connect the dots between all that scanning and any convictions. Section 2.2.3 states: “It is not currently possible…to establish a clear link between these convictions and the reports submitted by providers.” Germany and Spain didn’t provide usable figures.

Meta Pushes Canada for App Store Age Verification ID Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens.

The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads.

Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.”

Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.”

That phrase may sound privacy-conscious, but in practice, the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet?

Congress Pushes for Nationwide Internet Age Verification Plan

Republican lawmakers are proposing a new way to hold tech companies accountable under age verification laws, despite resistance from websites like Pornhub. The App Store Accountability Act (ASA), introduced by Senator Mike Lee (R-UT) and Representative John James (R-MI), proposes a different model: requiring app stores themselves to verify users’ ages and pass that information to apps when they are downloaded.

The bill is part of a broader push in Congress to tighten safeguards for minors online and has earned support from major tech companies, including Facebook parent company Meta, Pinterest, and Snap. Pinterest CEO Bill Ready argues that one standard would simplify the process and reduce the confusion created by a patchwork of state requirements. “The need for a federal standard is urgent,” he said.

“I think most people at most of these companies probably do want to protect kids,” Sen. Lee said, adding that support from tech companies like Pinterest “makes a big difference.”

However, the proposal faces resistance from civil liberties groups and digital rights advocates. Critics warn that compulsory age verification could limit access to lawful online content, raising First Amendment concerns. They also cite significant privacy risks, arguing that systems requiring users to submit sensitive personal information could expose them to data breaches or misuse.

Some major websites have rejected attempts to enforce online age verification. Pornhub has withdrawn its services from states that require government-issued ID or similar credentials for access to adult material. The company argued that these laws push users toward unregulated platforms while forcing supposedly legitimate sites to collect data they would prefer not to hold.

In 2025, the Supreme Court upheld Texas’s age-verification law for explicit content, with the majority concluding that states may require age checks to prevent minors from viewing harmful material.

Supporters of federal action contend that the ASA would avoid the growing compliance difficulties posed by differing state regulations. Sen. Lee has stated, “I don’t believe that there’s anything unlawful, unconstitutional, or otherwise problematic about this legislation,” arguing that an app-store-centered approach would reduce repeated verification across multiple platforms.

Tokyo Court Ruling Against Cloudflare Sets “Dangerous Precedent” for Internet Infrastructure Liability

Cloudflare has been ordered by a Tokyo District Court to pay 500 million yen, about 3.2 million US dollars, after judges ruled the company liable for aiding copyright infringement.

The case, brought by a coalition of Japan’s largest manga publishers and reported by TorrentFreak, challenges the long-held understanding that network infrastructure providers are not responsible for what passes through their systems.

It also signals a growing international push to make companies like Cloudflare police online content, an approach that could redefine how the open internet operates.

The publishers, Shueisha, Kodansha, Kadokawa, and Shogakukan, argued that Cloudflare’s global network, which caches and accelerates websites, helped pirate manga sites distribute illegal copies of their work. They said Cloudflare’s failure to verify customer identities allowed those sites to hide “under circumstances where strong anonymity was secured,” a factor the court said contributed to its finding of liability.

Cloudflare said it will appeal, calling the ruling a threat to fairness and due process and warning that it could have broad implications for the future of internet infrastructure. The company argues that its conduct complies with global norms and that it has no direct control over the content its clients publish or distribute.

The legal fight between Cloudflare and Japan’s major publishers began in 2018. The publishers asked the Tokyo District Court to intervene, claiming Cloudflare’s technology enabled piracy sites to thrive. They wanted the company to sever ties with the offending domains.

In 2019, a partial settlement was reached. The deal, later disclosed, required Cloudflare to stop replicating content from sites only after Japanese courts officially declared them illegal.

That agreement quieted the conflict for a time, but it did not resolve the larger question of whether a network service should be required to decide which content is lawful.

By early 2022, the same publishers returned to court, alleging that Cloudflare had failed to take “necessary measures” against known infringing sites.

They filed a new claim targeting four specific works and sought around four million dollars in damages. They also asked for an order that would compel Cloudflare to terminate service for illegal sites.

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

European Parliament agrees on resolution calling for minimum age on social media

The European Parliament on Wednesday agreed on a resolution which calls for a default minimum age of 16 on social media to ensure “age-appropriate online engagement”.

According to a draft published in October, the resolution called for “the establishment of a harmonised European digital age limit of 16 years old as the default threshold under which access to online social media platforms should not be allowed unless parents or guardians have authorised their children otherwise”.

It also called for a harmonised European digital age limit of 13, under which no minor could access social media platforms, and an age limit of 13 for video-sharing services and “AI companions”.

The Parliament resolution is not legally binding and does not set policy.

We Must Resist The Rise Of A Global Censorship Regime

The ordeal of Finnish Parliamentarian Päivi Räsänen, who just stood trial a third time – after being acquitted twice – for a 2019 tweet in which she simply shared a Scripture verse and her faith-based views on marriage and sexuality, is a warning to all who value the right to speak freely across the world.

When governments claim the power to police opinions, even peaceful expressions of faith can be dragged through the courts.

And now this promises to be a much more pervasive reality in Europe as a result of the 2022 Digital Services Act (DSA). Ahead of the European Union’s review of the DSA, 113 international experts committed to free speech wrote to the European Commission highlighting the law’s incompatibility with free expression, citing the possibility of worldwide takedown orders. Räsänen was a signatory to the letter, alongside a former vice president of Yahoo Europe, a former U.S. senator, and politicians, academics, lawyers, and journalists from around the globe.

The DSA gives the E.U. authority to enforce moderation of “illegal content” on platforms and search engines with over 45 million monthly users. It enables bureaucrats to control online speech at scale under the guise of “safety” and “protecting democracy.”

However, E.U. member states may have different definitions of illegal content. Thus, under the law, anything deemed illegal under the speech laws of any one E.U. member state could potentially be removed across all of Europe. That means the harshest censorship laws in Europe could soon govern the entire continent, and possibly the internet worldwide. And if platforms fail to comply, they face billions in fines, thus providing clear incentive to censor and none to promote free speech.

Late last month, the E.U. announced that Meta and TikTok could face fines of up to 6 percent of their global sales over alleged DSA violations related to transparency. But the well-founded fear is that this law, which grants sweeping authority to European regulators to control online speech across platforms including X, YouTube, and Facebook, will enable the kind of censorship endured by Räsänen on a global scale.

Further, citizens in countries outside of the E.U., like the United States, are at risk of facing new levels of censorship, because the DSA applies to any large online platform or search engine accessed within the E.U., regardless of where it is based. The law explicitly states its extraterritorial applicability: it covers platforms used by people “that have their place of establishment or are located in the Union, irrespective of where the providers of those intermediary services [the platforms] have their place of establishment.”

Platforms are incentivized to adapt their international content moderation policies to E.U. censorship. If those platforms deem something “illegal” under E.U. rules, that content may be banned everywhere, even in countries with strong free speech protections.

EU Parliament Votes for Mandatory Digital ID and Age Verification, Threatening Online Privacy

The European Parliament has voted to push the European Union closer to a mandatory digital identification system for online activity, approving a non-binding resolution that endorses EU-wide age verification rules for social media, video platforms, and AI chatbots.

Though presented as a child protection measure, the text strongly promotes the infrastructure for universal digital ID, including the planned EU Digital Identity Wallet and an age verification app being developed by the European Commission.

Under the proposal, every user would have to re-identify themselves at least once every three months to continue using major platforms. Children under 13 would be banned entirely, and teenagers between 13 and 16 would require parental approval to participate online.
