“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill to the Senate measure attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.

Keep reading

EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (under the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.
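To make the mechanism concrete, here is a minimal sketch of how detection of "known" material works in principle: a digest of each file is compared against a database of digests for previously catalogued content. The names and digest values below are illustrative, not any provider's real API, and deployed systems typically use perceptual hashes (such as PhotoDNA) rather than the plain cryptographic digest shown here.

```python
import hashlib

# Hypothetical database of digests for previously catalogued files.
# (The value below is simply the SHA-256 of the bytes b"foo".)
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a message or attachment."""
    return hashlib.sha256(data).hexdigest()

def is_known_material(data: bytes) -> bool:
    """Flag content only if its digest exactly matches a catalogued one."""
    return sha256_digest(data) in KNOWN_HASHES

# An exact-match digest says nothing about context or intent, and changing
# a single byte of the file produces a completely different digest.
print(is_known_material(b"foo"))   # the catalogued example matches
print(is_known_material(b"foo "))  # one added byte: no match
```

The sketch illustrates why this approach can only ever confirm material that is already in a database: it neither understands what an image depicts nor detects anything new, which is the limitation critics of the derogation point to.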

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

For one thing, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.

The German Federal Criminal Police Office (BKA) reports that close to half of all chat control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

Keep reading

EU Targets VPNs as Age Checks Expand

Australia’s under-16 social media restrictions have become a practical reference point for regulators who are moving beyond theory and into enforcement.

As the system settles into routine use, its side effects are becoming clearer. One of the most visible has been the renewed political interest in curbing tools that enable private communication, particularly Virtual Private Networks. That interest carries consequences well beyond “age assurance.”

A January 2026 briefing we obtained from the European Parliamentary Research Service traces a sharp rise in VPN use following the introduction of mandatory age checks.

The report notes “a significant surge in the number of virtual private networks (VPNs) used to bypass online age verification methods in countries where these have been put in place by law,” placing that trend within a broader policy environment where “protection of children online is high on the political agenda.”

Australia’s experience fits this trajectory. As age gates tighten, individuals reach for tools that reduce exposure to monitoring and profiling. VPNs are the first port of call in that response because they are widely available, easy to use, and designed to limit third-party visibility into online activity.

The EPRS briefing offers a clear description of what these tools do. “A virtual private network (VPN) is a digital technology designed to establish a secure and encrypted connection between a user’s device and the internet.”

It explains that VPNs hide IP addresses and route traffic through remote servers in order to “protect online communications from interception and surveillance.” These are civil liberties functions, not fringe behaviors, and they have long been treated as legitimate safeguards in democratic societies.

Keep reading

Dad claims 16-year-old daughter took her own life after meeting a predator on Roblox, slams game platform beloved by kids

Penelope Sokolowski was just 16 years old when she took her own life last February.

Her father, Jason, believes her suicide was the culmination of a grooming process that began on Roblox, the game platform beloved by kids — with some 170,000 users under the age of 13, according to company data from 2023.

“We kind of thought we were covering all the bases,” Jason told The Post, noting that his family had used a third-party app to monitor Penelope’s online activity.

Jason alleges that his only child was contacted by a predator on Roblox who coerced her into cutting his name into her chest and sending videos of herself bloodied from self-harm — and who, ultimately, sent Penelope down a spiral that culminated in her death.

The girl was 7 or 8 years old when she first signed up for Roblox, where players rove around online worlds and can chat with other users.

“I’d come in and sit in the room with her and see what she was doing, ask who those people were,” Jason said, recalling Penelope drawing an anime-style sketch for a friend she’d made on Roblox.

“As a dad I thought, oh, this is nice, she’s artistic, and she’s made artistic friends,” he added. “But I didn’t understand what Roblox was and its effect on her.”

The dad, who works in the film industry in Vancouver, British Columbia, separated from Penelope’s mother and moved out of the family home when the girl was 13.

He recalls how Penelope’s grades began to tumble and, when she was 14, he noticed scars from self-inflicted cuts on her arms, which she had been covering with bracelets and his oversized hockey jerseys. 

Penelope confided that she had been recruited into a self-harm group via Roblox, but assured her father she had moved on.

But not long after her 16th birthday, she took her own life.

Later, when Jason opened up his daughter’s cell phone, he found what he describes as a “crime scene.”

According to the dad, there were messages spanning two years with a person who egged on her self-destruction. Jason believes Penelope met this person on Roblox and then began privately conversing with them over Discord — sometimes for hours.

In one exchange, Penelope sent a photo of her chest, offering to cut herself there but worrying she couldn’t go “too deep.” Minutes later, she followed up with an image of the predator’s Discord user name written across her chest in bloodied letters.

In other images, she had carved the numbers “764” into her body. Jason believes Penelope had been contacted by a member of 764, described by the FBI as a “violent online group” that targets minors and grooms them into committing egregious acts of self-harm and violence.

Members of 764 reportedly troll platforms like Roblox looking for victims they can persuade — via grooming or sextortion — into hurting themselves.

“They are grooming girls to do whatever it is they can get a girl to do, whether it’s nudes or cuts or gore or violence,” Jason said. “[Penelope] was brainwashed all the way through.”

Keep reading

Congressional Report Warns Britain Is Exporting Censorship Worldwide

The government of British Prime Minister Keir Starmer has been directly named in a new United States congressional report that condemns Britain for adopting what it calls “copycat censorship laws,” warning that the country’s digital regulations now pose “a direct threat” to free speech.

The document, published by the United States House Committee on the Judiciary, describes an expanding campaign led by the European Commission to impose “strict digital censorship laws” on global technology platforms.

Lawmakers in Washington identified the Online Safety Act as the clearest example of this approach spreading beyond the European Union.

The Act was introduced with the stated goal of improving online safety, but it requires platforms such as X, Reddit, and TikTok to install age verification systems and remove material deemed harmful by regulators.

US lawmakers say these provisions give the British government broad authority to dictate what can and cannot be said online.

Keep reading

The Digital Media Oligarchy: Who Owns Online News? 

When Ben Bagdikian, an esteemed journalist and early FAIR contributor, published his groundbreaking book The Media Monopoly in 1983, he painted a troubling picture of US media consolidation, reporting that 50 corporations controlled the media business. With each reprint, that number dwindled (FAIR.org, 6/1/87). When FAIR replicated his analysis in 2011 (Extra!, 10/11), it stood at 20.

Now, over 40 years after the initial release of The Media Monopoly, the media landscape has transformed drastically. Even Bagdikian’s later editions, written at the dawn of the internet, could not fully anticipate how profoundly digital technology would reconfigure the media oligarchy.

“News” is increasingly synonymous with online news. Over half the US public (56%) say that they “often” get news through their digital devices—compared to less than 1 in 3 (32%) who often get news from TV, 1 in 9 from radio and only 1 in 14 from print publications like newspapers or magazines (Pew, 9/25/25).

Which raises the question: Who owns the leading online news sites—and, by extension, largely shapes the ideas and information that reach millions of Americans?

Each month, Press Gazette, a London-based magazine for the journalism industry, ranks the top 50 news websites in the US in order of monthly visits, based on data from the marketing firm Similarweb. FAIR tallied Press Gazette’s results over a 12-month span, from December 2024 to November 2025, to get a figure for total US visits to major news sites over that period: 45.6 billion.

More than half of those visits, nearly 25.5 billion, went to news sites controlled by just seven families or corporate entities.

Keep reading

Wyoming Introduces First-Ever Foreign Censorship Shield Bill

Wyoming has taken a historic step to insulate American speech from foreign interference with the introduction of the Wyoming Guaranteeing Rights Against Novel International Tyranny and Extortion (GRANITE) Act, House Bill 0070, which would be the first US law designed to create a private right of action against foreign censorship enforcement.

Representative Daniel Singh introduced the bill, declaring that “foreign governments have decided they can threaten American citizens and American companies for speech that is protected by our Constitution…Wyoming is drawing a line in the sand.” The measure aims to establish Wyoming as a refuge for free expression and digital innovation, directly challenging what lawmakers describe as an escalating campaign of transnational censorship pressure.

The legislation provides that any Wyoming resident, business, or US person with servers in the state may sue foreign governments or international organizations that attempt to enforce censorship demands against them for First Amendment protected speech. Each violation could cost the offending entity at least $1 million or 10% of its US revenue, whichever is higher.

The GRANITE Act prohibits Wyoming courts and agencies from recognizing or enforcing foreign censorship judgments. It also forbids any state cooperation with such orders, including extradition requests or data demands linked to speech that is constitutionally protected in the US. Under the bill, no Wyoming authority may help a foreign state investigate, penalize, or prosecute individuals over lawful expression.

We obtained a copy of the bill for you here.

Keep reading

Nine Bureaucracies Walk Into Your Browser and Ask for ID

By the time you’re reading this, there’s a decent chance that somewhere, quietly and with a great deal of bureaucratic back-patting, someone is trying to figure out exactly how old you are. And not because they’re planning a surprise party.

Not because you asked them to. But because the nine horsemen of the regulatory apocalypse have decided that the future of a “safe” internet depends on everyone flashing their ID like they’re trying to get into an especially dull nightclub.

This is the nightmare of “age assurance,” a term so bloodlessly corporate you can practically hear it sighing into its own PowerPoint.

This is a sprawling, gelatinous lump of biometric estimation, document scans, and AI-ified guesswork, stitched together into one big global initiative under the cheery-sounding Global Online Safety Regulators Network, or GOSRN. Catchy.

Formed in 2022, presumably after someone at Ofcom had an especially boring lunch break, GOSRN now boasts nine national regulators, including the UK, France, Australia, and that well-known digital superpower, Fiji, who have come together to harmonize policies on how to tell whether someone is too young to look at TikTok for adults.

The group is currently chaired by Ireland’s Coimisiún na Meán.

This month, this merry band of regulators released a “Position Statement on Age Assurance and Online Safety Regulation.”

We obtained a copy of the document for you here.

Inside this gem of a document is a plan to push shared age-verification principles across borders, including support for biometric analysis, official ID checks, and the general dismantling of anonymity for the greater good of child protection.

Keep reading

Rand Paul Turns Against Section 230, Citing YouTube Video Accusing Him of Taking Money From Maduro

Sen. Rand Paul (R–Ky.) has long been one of the few refreshing voices out of Washington, D.C., when it comes to free speech, including free speech on social media and elsewhere in the digital realm. He was one of just two senators to vote against FOSTA, the law that started the trend of trying to carve out Section 230 exceptions for every bad thing.

As readers of this newsletter know, Section 230 has been fundamental to the development and flourishing of free speech online.

Now, Paul has changed his mind about it. “I will pursue legislation toward” ending Section 230’s protections for tech companies, the Kentucky Republican wrote in the New York Post this week.

A Section 230 Refresher

For those who need a refresher (if not, skip to the next section): Section 230 of the Communications Act protects tech companies and their users from frivolous lawsuits and spurious charges. It says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If someone else is speaking (or posting), they—not you or Instagram or Reddit or YouTube or any other entity—are legally liable for that speech.

Politicians, state attorneys general, and people looking to make money off tech companies that they blame for their troubles hate Section 230. It stops the latter—including all sorts of ambulance-chasing lawyers—from getting big payouts from tech companies over speech for which these companies merely served as an unwitting conduit. It stops attorneys general from making good on big, splashy lawsuits framed around fighting the latest moral panic. And it prevents politicians from being more in control of what we all can say online.

If a politician doesn’t like something that someone has posted about them on the internet, doesn’t like their Google search results, or resents the fact that people can speak freely—and sometimes falsely—about political issues, it would be a lot easier to censor whatever it is that’s irking them in a world without Section 230. They could simply go to a tech platform hosting that speech and threaten a lawsuit if it was not removed.

Tech platforms might very well win many such lawsuits on First Amendment grounds, if they had the resources to fight them and chose that route. But it would be a lot easier, in many cases, for them to simply give in and do politicians’ bidding, rather than fight a protracted lawsuit. Section 230 gives them the impetus to resist and ensures that any suits that go forward will likely be over quickly, in their favor.

But here’s the key: Section 230 does not stop authorities from punishing companies for violations of federal law, and it does not stop anyone from going after the speakers of any illegal content. If someone posts a true threat on Facebook, they can still be hauled in for questioning about it. If someone uses Google ads to commit fraud, they’re not magically exempted from punishment for that fraud. And if someone posts a defamatory rant about you on X, you can still sue them for that rant.

Keep reading

Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks

Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth.

We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health.

Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17.

Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems.

The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge” or what can be “fairly implied” as knowledge that a user is under 13.

That open-ended standard puts enormous pressure on companies to avoid errors.

The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms. In effect, KOSMA would link access to everyday online life to a form of digital ID.

That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.

The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.

Keep reading