40 State Attorneys General Want To Tie Online Access to ID

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter for you here.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.
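
To make that concrete, here is a minimal sketch of what a device-level age signal could look like. Everything in it is hypothetical: the API name, the attestation shape, and the field names are invented for illustration, since S. 1748 only orders a study and no shipping operating system exposes such an interface. The sketch shows why a credential ends up underneath every such check.

```typescript
// Hypothetical sketch of an OS-level age attestation API. None of these
// names exist in any real operating system; they illustrate the kind of
// plumbing the S. 1748 study language points toward.

interface AgeAttestation {
  ageBracket: "under13" | "13to17" | "adult";
  // How the OS learned the age: each option traces back to a credential.
  source: "government_id" | "third_party_vendor" | "account_history";
  credentialId: string; // a stable identifier: the anonymity problem
  issuedAt: Date;
}

// An app or site asks the device, not the user, for an age signal.
function requestAgeAttestation(requestingOrigin: string): AgeAttestation {
  // In a real system this would call into a secure enclave or OS service
  // that has already validated an ID document or a vendor check.
  return {
    ageBracket: "adult",
    source: "government_id",
    credentialId: "cred-7f3a", // persists across sites and sessions
    issuedAt: new Date(),
  };
}

// Every origin that calls this receives a signal derived from the same
// underlying credential.
const attestation = requestAgeAttestation("https://example-forum.test");
console.log(attestation.ageBracket, attestation.source);
```

The detail that matters is the persistent credentialId: once every age check resolves to the same underlying credential, linking identity to activity becomes a property of the architecture rather than a policy choice.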

UK Fines US Platform Imgur For Lack of Age Verification

Imgur’s decision to suspend access for UK users in September 2025 was an early signal that regulatory pressure was building. The platform’s parent company has now learned the financial cost of that pressure.

The UK Information Commissioner’s Office has fined MediaLab, which operates the image-hosting service Imgur, £247,590 ($337,000) for violations of the UK GDPR.

According to the regulator, the company processed children’s personal data without a lawful basis, failed to implement effective age assurance measures, and did not complete a required data protection impact assessment.

The ICO’s findings focus on how children under 13 were able to use the service without verified parental consent or “any other lawful basis.”

The regulator also determined that the company lacked meaningful age checks. That means the platform did not reliably verify whether users were children before collecting and processing their data. Additionally, MediaLab did not conduct a formal risk assessment to examine how its service might affect minors’ rights and freedoms.

“MediaLab failed in its legal duties to protect children, putting them at unnecessary risk,” said UK Information Commissioner John Edwards. “For years, it allowed children to use Imgur without any effective age checks, while collecting and processing their data, which in turn exposed them to harmful and inappropriate content. Age checks help organizations keep children’s personal information safe.”

He added, “Ignoring the fact that children use these services, while processing their data unlawfully, is not acceptable. Companies that choose to ignore this can expect to face similar enforcement action.”

The ICO says it has the authority to impose fines of up to £17.5 million or 4 percent of an organization’s annual global revenue, whichever is higher. In setting the penalty at £247,590, the office stated that it “took into consideration the number of children affected by this breach, the degree of potential harm caused, the duration of the contraventions, and the company’s global turnover.”
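
For a sense of how that statutory cap works, a quick illustration; the turnover figures below are invented for the example, not MediaLab’s actual numbers:

```typescript
// UK GDPR upper-tier cap: the higher of £17.5m or 4% of annual global
// turnover. The turnover inputs here are purely illustrative.
const FLAT_CAP_GBP = 17_500_000;
const RATE = 0.04;

function maxFine(globalTurnoverGbp: number): number {
  return Math.max(FLAT_CAP_GBP, RATE * globalTurnoverGbp);
}

console.log(maxFine(100_000_000));   // 17_500_000 — the flat cap dominates
console.log(maxFine(1_000_000_000)); // 40_000_000 — the 4% figure dominates
```

Measured against that ceiling, the £247,590 penalty reflects the mitigating factors the office listed.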

This enforcement action sits within a broader UK policy shift toward mandatory online age verification.

Lawmakers and regulators have increasingly pressed platforms to deploy age assurance tools that can include document checks, facial age estimation, or third-party verification services, all of them privacy-invasive.

While positioned as child protection measures, these systems often require users to submit government-issued identification or biometric data simply to access online services.

“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion to the Senate bill attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.

Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.
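
A rough sketch of the default-deny logic Discord is describing, reconstructed from the announced behavior; the types and function names here are ours, not Discord’s code or API:

```typescript
// Hypothetical reconstruction of Discord's announced gating: every
// account starts in the teen experience, and features unlock only
// after adult verification. Not Discord's real code.

type VerificationStatus = "unverified" | "verified_adult";

interface Account {
  status: VerificationStatus;
}

interface Capabilities {
  ageRestrictedServers: boolean;
  stageVoice: boolean;
  sensitiveMediaVisible: boolean;
  unknownDmsToSeparateInbox: boolean;
}

function capabilitiesFor(account: Account): Capabilities {
  const adult = account.status === "verified_adult";
  return {
    // Age-restricted servers vanish entirely for unverified accounts,
    // including servers the user joined years ago.
    ageRestrictedServers: adult,
    stageVoice: adult,
    sensitiveMediaVisible: adult, // otherwise automated filters apply
    unknownDmsToSeparateInbox: !adult,
  };
}

console.log(capabilitiesFor({ status: "unverified" }));
```

The structural point is that “unverified” is no longer a neutral state but a restricted one, which is what turns verification from an option into a de facto requirement.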

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.

Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID.

The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations.

Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement. To deliver different “safe” experiences to minors and adults, platforms must first determine who is a minor and who is not; any such mandate depends on a reliable method of telling the two groups apart.

Platforms cannot apply separate algorithms, screen-time limits, or nighttime restrictions without determining a user’s age with a level of confidence regulators will accept.
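
A simplified sketch of that dependency, with invented names and thresholds rather than anything from the DSA’s text or TikTok’s systems:

```typescript
// Hypothetical: every regulator-mandated safeguard branches on age, so
// an age signal (at a confidence regulators accept) must exist before
// any of the downstream rules can run.

interface AgeSignal {
  isMinor: boolean;
  confidence: number; // 0..1, must clear a regulator-set threshold
}

const REQUIRED_CONFIDENCE = 0.9; // illustrative threshold

function applySafeguards(signal: AgeSignal, hourOfDay: number) {
  if (signal.confidence < REQUIRED_CONFIDENCE) {
    // The platform cannot lawfully pick a branch, which in practice
    // means escalating to stronger (more invasive) verification.
    return { action: "escalate_verification" };
  }
  if (!signal.isMinor) return { action: "default_feed" };
  return {
    action: "restricted_feed",
    autoplay: false,
    pushNotifications: hourOfDay >= 7 && hourOfDay < 21, // nighttime curfew
    dailyScreenTimeLimitMinutes: 60,
  };
}

console.log(applySafeguards({ isMinor: true, confidence: 0.95 }, 23));
```

The escalate_verification branch is the quiet requirement: any user whose age cannot be inferred confidently enough, minor or adult, gets pushed toward stronger checks.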

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue.

EU Targets VPNs as Age Checks Expand

Australia’s under-16 social media restrictions have become a practical reference point for regulators who are moving beyond theory and into enforcement.

As the system settles into routine use, its side effects are becoming clearer. One of the most visible has been the renewed political interest in curbing tools that enable private communication, particularly Virtual Private Networks. That interest carries consequences well beyond “age assurance.”

A January 2026 briefing we obtained from the European Parliamentary Research Service traces a sharp rise in VPN use following the introduction of mandatory age checks.

The report notes “a significant surge in the number of virtual private networks (VPNs) used to bypass online age verification methods in countries where these have been put in place by law,” placing that trend within a broader policy environment where “protection of children online is high on the political agenda.”

Australia’s experience fits this trajectory. As age gates tighten, individuals reach for tools that reduce exposure to monitoring and profiling. VPNs are the first port of call in that response because they are widely available, easy to use, and designed to limit third-party visibility into online activity.

The EPRS briefing offers a clear description of what these tools do. “A virtual private network (VPN) is a digital technology designed to establish a secure and encrypted connection between a user’s device and the internet.”

It explains that VPNs hide IP addresses and route traffic through remote servers in order to “protect online communications from interception and surveillance.” These are civil liberties functions, not fringe behaviors, and they have long been treated as legitimate safeguards in democratic societies.
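
The mechanics the briefing describes can be modeled in a few lines. This is a toy model: real VPNs rely on actual cryptography (WireGuard, OpenVPN) rather than the base64 stand-in below, but the addressing logic is the same.

```typescript
// Toy model of VPN tunneling: the destination sees the VPN server's
// address, not the client's. Real VPNs encrypt the inner packet;
// base64 here is a placeholder, not security.

interface Packet { srcIp: string; dstIp: string; payload: string; }

function tunnel(clientIp: string, vpnIp: string, inner: Packet): Packet {
  // The client wraps the real request in an envelope addressed to the VPN.
  return {
    srcIp: clientIp,
    dstIp: vpnIp,
    payload: Buffer.from(JSON.stringify(inner)).toString("base64"),
  };
}

function vpnForward(envelope: Packet, vpnIp: string): Packet {
  // The VPN server unwraps and forwards with ITS address as the source.
  const inner: Packet = JSON.parse(
    Buffer.from(envelope.payload, "base64").toString("utf8"));
  return { ...inner, srcIp: vpnIp };
}

const inner: Packet = { srcIp: "203.0.113.5", dstIp: "site.example", payload: "GET /" };
const envelope = tunnel("203.0.113.5", "198.51.100.9", inner);
const delivered = vpnForward(envelope, "198.51.100.9");
console.log(delivered.srcIp); // "198.51.100.9" — the client's IP never reaches the site
```

The destination only ever sees the VPN server’s address, which is precisely the property age-verification regimes find inconvenient.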

Nine Bureaucracies Walk Into Your Browser and Ask for ID

By the time you’re reading this, there’s a decent chance that somewhere, quietly and with a great deal of bureaucratic back-patting, someone is trying to figure out exactly how old you are. And not because they’re planning a surprise party.

Not because you asked them to. But because the nine horsemen of the regulatory apocalypse have decided that the future of a “safe” internet depends on everyone flashing their ID like they’re trying to get into an especially dull nightclub.

This is the nightmare of “age assurance,” a term so bloodlessly corporate you can practically hear it sighing into its own PowerPoint.

This is a sprawling, gelatinous lump of biometric estimation, document scans, and AI-ified guesswork, stitched together into one big global initiative under the cheery-sounding Global Online Safety Regulators Network, or GOSRN. Catchy.

Formed in 2022, presumably after someone at Ofcom had an especially boring lunch break, GOSRN now boasts nine national regulators, including the UK, France, Australia, and that well-known digital superpower, Fiji, who have come together to harmonize policies on how to tell whether someone is too young to look at TikTok for adults.

The group is currently chaired by Ireland’s Coimisiún na Meán.

This month, this merry band of regulators released a “Position Statement on Age Assurance and Online Safety Regulation.”

We obtained a copy of the document for you here.

Inside this gem of a document is a plan to push shared age-verification principles across borders, including support for biometric analysis, official ID checks, and the general dismantling of anonymity for the greater good of child protection.

Government-Controlled Digital ID is Not the Optional Convenience It Is Being Sold As

The UK government has pledged to introduce a digital ID system for all UK citizens and legal residents by the end of the current Parliament (so no later than 2029). The integration of digital ID into government services, though already under way, has hitherto been largely voluntary. However, it is becoming steadily less optional, as the government has said it will now be required as a precondition for work in the UK, and a version of it (GOV.UK One Login) is already being imposed unilaterally upon company directors throughout the UK.

Chief Secretary to the Prime Minister Darren Jones has suggested in a recent interview (19/11) that digital ID is completely optional and will simply make government services more accessible and convenient. But this is a rather disingenuous sales pitch. Starmer himself insists that digital ID will be required as a precondition to work legally in the UK, and the current voluntariness looks like the transition period that accompanies any new technology; it is unlikely to last.

Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks

Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth.

We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health.

Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17.

Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems.

The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge” or what can be “fairly implied” as knowledge that a user is under 13.

That open-ended standard puts enormous pressure on companies to avoid errors.
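
A hypothetical sketch of the compliance calculus that standard creates; the signals, thresholds, and function below are invented for illustration and do not come from the bill text:

```typescript
// Hypothetical compliance logic under a "fairly implied" knowledge
// standard: because weak signals can count as knowledge, the safe move
// for a platform is to demand verification at low thresholds.

interface UserSignals {
  selfReportedAge: number;
  // Soft signals a regulator might say "fairly implied" the user's age.
  followsContentPopularWithChildren: boolean;
  gradeSchoolVocabularyScore: number; // 0..1, illustrative
}

type Decision = "allow" | "require_age_verification" | "delete_account";

function complianceDecision(u: UserSignals): Decision {
  if (u.selfReportedAge < 13) return "delete_account"; // actual knowledge
  const impliedUnder13 =
    u.followsContentPopularWithChildren && u.gradeSchoolVocabularyScore > 0.5;
  // Legal exposure makes "when in doubt, verify" the rational policy.
  return impliedUnder13 ? "require_age_verification" : "allow";
}

console.log(complianceDecision({
  selfReportedAge: 34,
  followsContentPopularWithChildren: true,
  gradeSchoolVocabularyScore: 0.7,
})); // an adult gets routed into ID verification
```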

The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms. In effect, KOSMA would link access to everyday online life to a form of digital ID.

That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.

The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.

TSA Proposes MyTSA PreCheck Digital ID, Integrating Biometrics and Federal Databases

The Transportation Security Administration is reshaping how it verifies the identities of US air travelers, proposing a major update that merges biometric data, mobile credentials, and government authentication platforms into one expanded framework.

Published in the Federal Register, the notice outlines a new form of digital identification, the MyTSA PreCheck ID, which would extend the agency’s existing PreCheck program into a mobile environment requiring more detailed data from participants.

Under the plan, travelers who want to activate the new digital ID on their phones would have to provide additional biographic and biometric details such as fingerprints and facial imagery, along with the information already collected for PreCheck enrollment.

The proposal appears alongside TSA’s recently finalized ConfirmID program, a separate fee-based service designed for passengers who arrive at checkpoints without a REAL ID or another approved credential.
