40 State Attorneys General Want To Tie Online Access to ID

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter for you here.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.
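S. 1748 stops at commissioning a study, so no concrete protocol exists yet. Still, the basic shape of any device-level check is predictable, and a minimal sketch makes the dependency on credentialing visible. Everything below is hypothetical, including the issuer, the key, and the field names; it simply shows that some identity provider must sign an age claim, the device must verify that signature, and the claim arrives bound to a subject identifier.

```python
# Illustrative sketch only: S. 1748 orders a feasibility study, not a protocol.
# All names and keys here are hypothetical. The point is structural: a signed
# age claim needs an issuer, a verifier, and a subject it is bound to.
import hashlib
import hmac
import json

ISSUER_KEY = b"hypothetical-issuer-secret"  # stand-in for an ID provider's signing key

def issue_attestation(user_id: str, over_18: bool) -> dict:
    """A hypothetical identity provider signs an age claim for a device."""
    claim = json.dumps({"sub": user_id, "over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def device_gate(attestation: dict) -> bool:
    """A hypothetical OS-level gate: verify the signature, then read the claim."""
    expected = hmac.new(ISSUER_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False  # unverifiable credential, so access is denied
    return json.loads(attestation["claim"])["over_18"]

token = issue_attestation("user-123", over_18=True)
print(device_gate(token))  # True, but the approved claim is bound to "sub"
```

Even in this toy version, the attestation carries a subject identifier, because the issuer has to know whom it is vouching for. That binding is the part no amount of platform-side engineering removes.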

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.

Keep reading

“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill to the Senate measure attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.

Keep reading

EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (under the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.
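To see why that mechanism only ever catches catalogued files, consider a minimal sketch of exact hash matching. The database entry and function names below are illustrative, not any provider’s actual code:

```python
# Minimal sketch of hash-based detection of "known" material.
# The blocklist entry is the SHA-256 of b"foo", used here as a stand-in.
import hashlib

KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def scan_attachment(data: bytes) -> bool:
    """Flag a file only if its digest exactly matches a catalogued entry."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(scan_attachment(b"foo"))   # True: byte-identical to a catalogued file
print(scan_attachment(b"foo."))  # False: any change at all defeats an exact hash
```

Deployed systems typically rely on perceptual hashes such as Microsoft’s PhotoDNA, which tolerate re-encoding and resizing at the cost of occasional misfires. Either way, the scan recognizes only material someone has already catalogued; it cannot detect new abuse.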

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

First, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.

The German Federal Criminal Police Office (BKA) reports that close to half of all chat control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

Keep reading

Oklahoma Governor Wants Voters To Revisit Medical Marijuana Legalization Law And ‘Shut It Down’

The Republican governor of Oklahoma says he wants voters to revisit the state’s medical marijuana law and “shut it down,” arguing that “liberal activists” conned the state and “opened up Pandora’s box” with legalization.

During his State of the State address on Monday, Gov. Kevin Stitt (R) said his “top priority has always been keeping Oklahoma safe,” and one of the “greatest threats to public safety is the out-of-control marijuana industry.”

“When Oklahomans voted to legalize medical marijuana in 2018, we were sold a bill of goods,” he said. “Out of state liberal activists preyed on the compassionate nature of Oklahomans. Then, it opened up Pandora’s box.”

The governor complained that the state has “more dispensaries than we do pharmacies,” adding that marijuana retailers “hide an industry that enables cartel activity, human trafficking, and foreign influence in our state.”

While regulators and law enforcement have “done incredible work to hold back the tide of illegal activity,” Stitt said, the industry is “plagued by foreign criminal interests and bad actors, making it nearly impossible to rein in.”

“We can’t put a band-aid on a broken bone,” he said. “Knowing what we know, it’s time to let Oklahomans bring safety and sanity back to their neighborhoods. Send the marijuana issue back to the vote of the people and shut it down.”

Keep reading

Alabama Lawmakers Pass Bill To Increase Penalties For Smoking Marijuana In A Car Where A Child Is Present

The Alabama House of Representatives Thursday passed a bill that prohibits smoking or vaping marijuana in a car with children.

HB 72, sponsored by Rep. Patrick Sellers, D-Pleasant Grove, would make it a Class A misdemeanor, punishable by up to a year in jail, to smoke marijuana in a car with a child under 19.

The bill passed 77-2 after an unusual debate over potential unintended consequences, conducted largely among the 29 Democrats in the 105-member chamber. Most Democrats abstained from the vote. Four voted in favor; Reps. Mary Moore, D-Birmingham, and TaShina Morris, D-Montgomery, voted against the bill.

“It’s about protecting the children, protecting every single child in the state of Alabama,” Sellers said after the meeting. “And that’s the motivation behind making sure that every child has the 100 percent ability to learn in the best environment that they can and keep them safe.”

Under the bill, individuals who are found to have smoked marijuana in the car with a child would be required to go through an educational program conducted by the Department of Public Health and would be reported by law enforcement to local county human resources departments.

Several Democrats who spoke on the measure cited the toll that harsh drug laws had taken on minority communities.

“It goes back to the heart of criminalization of marijuana in certain communities,” Rep. Juandalynn Givan, D-Birmingham, said after the meeting. “And those are communities that are communities typical of people of color.”

Givan also said House Democrats had wanted to work with Sellers on the bill.

“The Democratic Party, on several attempts, said that this is a bill that we might need to sit down and curate,” she said. “I’m not sure why the sponsor of the bill did not do that.”

Morris raised concerns about the bill’s definition of a child during debate.

“So we’re making a parent responsible for an 18-year-old who has a marijuana smell on them,” she said. “We know at the ages of 16 and 17, especially with the influence of walking outside and going different places, that they are smoking, maybe without the parent even knowing.”

Rep. Rolanda Hollis, D-Birmingham, said during debate that parents don’t know everything that their child does.

“As a parent you may not know, and here I don’t know if the counselor or the principal can call you in to say ‘Hey this is what we smelled on your kid’s jacket, how are we gonna handle this?’ But instead you got me going to a class for something I don’t even know about,” she said.

When asked after the meeting about Morris’ concerns about the bill’s language regarding age, Sellers said parents should “stop making excuses” for their children.

“You know whether or not your child is smoking marijuana. If someone lives in your house, you know they’re smoking marijuana because you can smell it. It’s a distinct smell,” he said.

Keep reading

Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks

Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth.

We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health.

Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17.

Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems.

The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge” or what can be “fairly implied” as knowledge that a user is under 13.

That open-ended standard puts enormous pressure on companies to avoid errors.

The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms. In effect, KOSMA would link access to everyday online life to a form of digital ID.

That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.

The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.

Keep reading

Florida’s “App Store Accountability Act” Would Deputize Big Tech to Verify User IDs for App Access

In Florida, Senator Alexis Calatayud has introduced a proposal that could quietly reshape how millions of Americans experience the digital world.

The App Store Accountability Act (SB 1722), presented as a safeguard for children, would require every app marketplace to identify users by age category, verify that data through “commercially available methods,” and secure recurring parental consent whenever an app’s policies change.
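The bill prescribes obligations rather than mechanisms, but the gating logic those obligations imply is straightforward. Here is a purely illustrative model, with every name hypothetical, of what recurring parental consent tied to policy changes would look like in practice:

```python
# Purely illustrative model of the gating flow SB 1722 describes; the bill
# specifies duties, not code, and every name here is hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    age_category: str              # e.g. "under_13", "13_15", "16_17", "adult"
    verified: bool                 # via "commercially available methods"
    consented_policy_version: int  # the app policy version a parent last approved

def may_download(account: Account, app_policy_version: int) -> bool:
    """Gate every download and update on verification and current consent."""
    if not account.verified:
        return False               # no verified age category, no access
    if account.age_category == "adult":
        return True
    # Minors need parental consent matching the app's *current* policy version:
    return account.consented_policy_version == app_policy_version

acct = Account(age_category="13_15", verified=True, consented_policy_version=3)
print(may_download(acct, app_policy_version=3))  # True
print(may_download(acct, app_policy_version=4))  # False: re-consent required
```

Because consent is pinned to a specific policy version, every policy revision forces a fresh round of parental approval, which is what makes the mandate recurring.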

The legislation is ambitious. If enacted, it would take effect in July 2027, with enforcement beginning the following year.

Each violation could carry penalties of up to $7,500, along with injunctions and attorney fees.

On its surface, this is a regulatory measure aimed at strengthening parental oversight and protecting minors from online harms. Yet it runs up against a larger philosophical and rights struggle.

For much of modern political thought, the relationship between authority and liberty has revolved around who decides what constitutes protection. Florida’s proposal situates that question in the hands of private corporations. The bill effectively deputizes Big Tech app store operators, such as Apple and Google, as arbiters of digital identity, compelling them to verify user ages and manage parental permissions across every platform.

Millions of Floridians could be required to submit identifying details or official documents simply to access or update apps. This process, while justified as a measure of security, reintroduces the age-old tension between the protective role of the state and the autonomy of the citizen.

By making identity verification the gateway to digital access, the law risks creating an infrastructure in which surveillance becomes a condition of participation. It is a move from voluntary oversight to systemic authentication, merging the roles of government and corporation in a single mechanism of control.

The proposal may collide with long-established constitutional principles. One of the objections lies in the concept of prior restraint. By conditioning minors’ ability to download or continue using apps on verified Big Tech platforms, the bill requires permission before access, effectively placing all expressive content behind a regulatory gate.

Apps today are not mere entertainment; they are conduits of news, art, religion, and political discourse. Restricting that access risks transforming a parental safeguard into a systemic filter for speech.

The burden falls most heavily on minors, whose First Amendment protections are often ignored in public debate.

Keep reading

Democratic Senators Urge Tech Platforms to Restrict AI Images, Including Altered Clothing and Body-Shape Edits

Democratic senators are broadening the definition of what counts as restricted online content, moving from earlier efforts focused on explicit deepfakes to a new campaign against what they call “non-nude sexualized” material.

The new language dramatically expands the category of what can be censored, reaching beyond pornography or criminal exploitation to include images with altered clothing, edited body shapes, or suggestive visual effects.

Senator Lisa Blunt Rochester of Delaware led the group of seven Democrats who signed a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X.

We obtained a copy of the letter for you here.

The signatories — Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff — are asking for records that define how each company classifies and removes this type of content, as well as any internal documents or moderator guidance about “virtual undressing” and similar AI edits.

“We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized ‘bikini’ or ‘non-nude’ images of individuals without their consent and distributing them on platforms including X and others,” the senators wrote.

“These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.”

Their argument rests on reports describing AI tools that can transform photos of clothed women into revealing deepfakes or fabricate images of sexualized poses. The senators describe this as evidence of a growing “crisis of image-based abuse” that undermines trust and safety online.

But the language of the letter goes further than earlier initiatives that targeted explicit content. It introduces a much wider standard where mere suggestion or aesthetic change could qualify as “sexualized.” The call to prohibit “altered clothing” or “body-shape edits” effectively merges real abuse prevention with subjective judgments about appearance.

Keep reading

The Great Grok Bikini Scandal is just Digital ID via the Backdoor

Two days ago, the British government announced a U-turn on its proposed digital identity scheme: the much-anticipated “BritCard” would no longer be mandatory to work in the UK.

This was welcomed as a victory by both fake anti-establishment types whose job is to Pied Piper genuine opposition, and some real resistance who should know better.

The reality is that reports of the death of digital identity have been greatly exaggerated. All they said was that it would no longer be mandatory.

Having a bank account, a cellphone, or an internet connection is not mandatory, but try functioning in this world without them.

As we said on X, anybody who understands governments or human nature knew any digital ID was likely never going to be gun-to-your-head, risking-prison-time mandatory.

All it has to be is a little bit faster and/or a little bit cheaper.

Saving you half an hour when submitting your tax return, faster progress through customs, lower “processing fees” for passport or driver’s license applications.

An hour of extra time and 50 pounds saved per year will coerce more effectively than barbed wire and billy clubs ever could.

Running alongside this is the manufactured drama around Grok’s generation of images of bikini-clad public figures, something which it suited the press and punditry class to work up into “sexual assault” and “pornography” whilst imploring us all to “think of the children!”

Inside a week, X has changed its policy, and Sir Keir Starmer’s government has promised a swift resolution of the issue using legislation that was (conveniently) passed last year but has yet to be enforced (more on that in the next few days).

This issue became a “problem”, had an hysterical “reaction” and was supplied a ready-made “solution” all inside two weeks. A swifter procession of the Hegelian dialectic would be hard to find.

So, we have the reported demise of mandatory digital identity occurring alongside the rise of the “threat” of AI “deepfakes”.

Nobody in the mainstream press has actually linked these stories together, but the connection is as obvious as the next step is inevitable.

This next step is the UK introducing its own version of the Australian “social media ban” for under-16s. In effect, age-gating all online interaction on major platforms and ending online anonymity.

Keep reading

Democrats Demand Apple and Google Ban X From App Stores

Apple and Google are under mounting political pressure from Democrats over X’s AI chatbot, Grok, after lawmakers accused the platform of producing bikini images of women and, allegedly, of minors.

While the outrage targets X specifically, the ability to generate such material is not unique to one platform. Similar image manipulation and synthetic content creation can be found across nearly every major AI system available today.

Yet the letter sent to Apple CEO Tim Cook and Google CEO Sundar Pichai by Senators Ron Wyden, Ben Ray Luján, and Ed Markey asked the tech giants only about X and demanded that the companies remove X from their app stores entirely.

X has around 557 million users.

We obtained a copy of the letter for you here.

The lawmakers wrote that “X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms.”

They pointed to Google’s developer rules, which prohibit apps that facilitate “the exploitation or abuse of children,” and Apple’s policy against apps that are “offensive” or “just plain creepy.”

Ignoring the First Amendment completely, the letter states that “Apple and Google must remove these apps from the app stores until X’s policy violations are addressed.”

Dozens of generative systems, including open-source image models that can’t be controlled or limited by anyone, can produce the same kinds of bikini images with minimal prompting.

The senators cited prior examples of Apple and Google removing apps such as ICEBlock and Red Dot under government pressure.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the letter stated.

Keep reading