Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks

Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth.

We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health.

Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17.

Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems.

The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge” or what can be “fairly implied” as knowledge that a user is under 13.

That open-ended standard puts enormous pressure on companies to avoid errors.

The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms. In effect, KOSMA would link access to everyday online life to a form of digital ID.

That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.

The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.
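To make the compliance pressure concrete, here is a minimal sketch of the signup gate a risk-averse platform might build in response. Everything in it is hypothetical: KOSMA does not specify this flow, and the credential type, function names, and document-check step are illustrative assumptions only.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerifiedCredential:
    """Hypothetical result of a third-party ID check
    (driver's license scan, facial age estimate, etc.)."""
    birth_date: date
    method: str  # e.g. "drivers_license", "facial_age_estimate"

def signup_allowed(credential: VerifiedCredential | None) -> bool:
    # Under a "fairly implied" knowledge standard, an unverified user is a
    # legal liability, so the safest policy is: no credential, no account.
    if credential is None:
        return False
    today = date.today()
    age = today.year - credential.birth_date.year - (
        (today.month, today.day)
        < (credential.birth_date.month, credential.birth_date.day)
    )
    return age >= 13  # KOSMA's under-13 ban; ages 13-17 would get a restricted feed
```

Note where the burden lands in this sketch: every adult must also present a credential, because without one the platform cannot show regulators that it lacked “implied” knowledge of a user’s age.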

Keep reading

Florida’s “App Store Accountability Act” Would Deputize Big Tech to Verify User IDs for App Access

In Florida, Senator Alexis Calatayud has introduced a proposal that could quietly reshape how millions of Americans experience the digital world.

The App Store Accountability Act (SB 1722), presented as a safeguard for children, would require every app marketplace to identify users by age category, verify that data through “commercially available methods,” and secure recurring parental consent whenever an app’s policies change.

The legislation is ambitious. If enacted, it would take effect in July 2027, with enforcement beginning the following year.

Each violation could carry penalties of up to $7,500, along with injunctions and attorney fees.
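To illustrate the bookkeeping SB 1722 would demand, here is a rough sketch of the recurring-consent check an app marketplace might run. The data model and trigger logic are assumptions for illustration; the bill states its requirements only in prose.

```python
from dataclasses import dataclass

@dataclass
class MinorAccount:
    user_id: str
    age_category: str                  # verified via "commercially available methods"
    consented_policy_version: int = 0  # last version a parent approved

@dataclass
class App:
    app_id: str
    policy_version: int = 1            # bumped whenever the app's policies change

def may_install_or_update(account: MinorAccount, app: App) -> bool:
    # An SB 1722-style rule: any policy change invalidates prior consent,
    # so the store must return to the parent before allowing access.
    return account.consented_policy_version >= app.policy_version

def record_parental_consent(account: MinorAccount, app: App) -> None:
    # Hypothetical callback fired after a parent approves the current terms.
    account.consented_policy_version = app.policy_version
```

Multiplied across every app and every routine policy tweak, the store ends up maintaining a continuously refreshed, identity-linked ledger of which child uses which app.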

On its surface, this is a regulatory measure aimed at strengthening parental oversight and protecting minors from online harms. Yet it runs up against a larger philosophical and rights struggle.

For much of modern political thought, the relationship between authority and liberty has revolved around who decides what constitutes protection. Florida’s proposal situates that question in the hands of private corporations. The bill effectively deputizes Big Tech app store operators, such as Apple and Google, as arbiters of digital identity, compelling them to verify user ages and manage parental permissions across every platform.

Millions of Floridians could be required to submit identifying details or official documents simply to access or update apps. This process, while justified as a measure of security, reintroduces the age-old tension between the protective role of the state and the autonomy of the citizen.

By making identity verification the gateway to digital access, the law risks creating an infrastructure in which surveillance becomes a condition of participation. It is a move from voluntary oversight to systemic authentication, merging the roles of government and corporation in a single mechanism of control.

The proposal may collide with long-established constitutional principles. One of the objections lies in the concept of prior restraint. By conditioning minors’ ability to download or continue using apps on verification by Big Tech platforms, the bill requires permission before access, effectively placing all expressive content behind a regulatory gate.

Apps today are not mere entertainment; they are conduits of news, art, religion, and political discourse. Restricting that access risks transforming a parental safeguard into a systemic filter for speech.

The burden falls most heavily on minors, whose First Amendment protections are often ignored in public debate.

Keep reading

Democratic Senators Urge Tech Platforms to Restrict AI Images, Including Altered Clothing and Body-Shape Edits

Democratic senators are broadening the definition of what counts as restricted online content, moving from earlier efforts focused on explicit deepfakes to a new campaign against what they call “non-nude sexualized” material.

The new language dramatically expands the category of what can be censored, reaching beyond pornography or criminal exploitation to include images with altered clothing, edited body shapes, or suggestive visual effects.

Senator Lisa Blunt Rochester of Delaware led seven fellow Democrats in signing a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X.

We obtained a copy of the letter for you here.

The seven co-signers — Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff — are asking for records that define how each company classifies and removes this type of content, as well as any internal documents or moderator guidance about “virtual undressing” and similar AI edits.

“We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized ‘bikini’ or ‘non-nude’ images of individuals without their consent and distributing them on platforms including X and others,” the senators wrote.

“These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.”

Their argument rests on reports describing AI tools that can transform photos of clothed women into revealing deepfakes or fabricate images of sexualized poses. The senators describe this as evidence of a growing “crisis of image-based abuse” that undermines trust and safety online.

But the language of the letter goes further than earlier initiatives that targeted explicit content. It introduces a much wider standard where mere suggestion or aesthetic change could qualify as “sexualized.” The call to prohibit “altered clothing” or “body-shape edits” effectively merges real abuse prevention with subjective judgments about appearance.

Keep reading

The Great Grok Bikini Scandal Is Just Digital ID via the Backdoor

Two days ago, the British government announced a U-turn on its proposed digital identity scheme: the much-anticipated “BritCard” would no longer be mandatory to work in the UK.

This was welcomed as a victory by both fake anti-establishment types whose job is to Pied Piper genuine opposition, and some real resistance who should know better.

The reality is that reports of the death of digital identity have been greatly exaggerated. All they said was that it would no longer be mandatory.

Having a bank account, a cellphone, or an internet connection is not mandatory, but try functioning in this world without them.

As we said on X, anybody who understands governments or human nature knew any digital ID was likely never going to be gun-to-your-head, risking-prison-time mandatory.

All it has to be is a little bit faster and/or a little bit cheaper.

Saving you half an hour when submitting your tax return, faster progress through customs, lower “processing fees” for passport or driver’s license applications.

An hour of extra time and 50 pounds saved per year will do more coercion than barbed wire and billy clubs ever could.

Running alongside this is the manufactured drama around Grok’s generation of images of bikini-clad public figures, something which it suited the press and punditry class to work up into “sexual assault” and “pornography” whilst imploring us all to “think of the children!”

Inside a week, X has changed its policy, and Sir Keir Starmer’s government has promised a swift resolution of the issue using legislation that was (conveniently) passed last year but has yet to be enforced (more on that in the next few days).

This issue became a “problem”, had an hysterical “reaction” and was supplied a ready-made “solution” all inside two weeks. A swifter procession of the Hegelian dialectic would be hard to find.

So, we have the reported demise of mandatory digital identity occurring alongside the rise of the “threat” of AI “deepfakes”.

Nobody in the mainstream press has actually linked these stories together, but the connection is as obvious as the next step is inevitable.

This next step is the UK introducing its own version of the Australian “social media ban” for under-16s. In effect, age-gating all online interaction on major platforms and ending online anonymity.

Keep reading

Democrats Demand Apple and Google Ban X From App Stores

Apple and Google are under mounting political pressure from Democrats over X’s AI chatbot, Grok, after lawmakers accused the platform of producing images of women, and allegedly of minors, in bikinis.

While the outrage targets X specifically, the ability to generate such material is not unique to one platform. Similar image manipulation and synthetic content creation can be found across nearly every major AI system available today.

Yet the letter sent to Apple CEO Tim Cook and Google CEO Sundar Pichai by Senators Ron Wyden, Ben Ray Luján, and Ed Markey asked the tech giants only about X and demanded that the companies remove X from their app stores entirely.

X has around 557 million users.

We obtained a copy of the letter for you here.

The lawmakers wrote that “X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms.”

They pointed to Google’s developer rules, which prohibit apps that facilitate “the exploitation or abuse of children,” and Apple’s policy against apps that are “offensive” or “just plain creepy.”

Ignoring the First Amendment completely, “Apple and Google must remove these apps from the app stores until X’s policy violations are addressed,” the letter states.

Dozens of generative systems, including open-source image models that can’t be controlled or limited by anyone, can produce the same kinds of bikini images with minimal prompting.

The senators cited prior examples of Apple and Google removing apps such as ICEBlock and Red Dot under government pressure.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the letter stated.

Keep reading

Here’s PROOF That UK’s X Ban Has NOTHING To Do With Protecting Children

As UK authorities ramp up their assault on free speech, a viral post shared by Elon Musk exposes the glaring hypocrisy in the government’s “protect the children” narrative. Data from the National Society for the Prevention of Cruelty to Children (NSPCC) and police forces reveals Snapchat as the epicenter of online child sexual grooming, dwarfing X’s minimal involvement.

This comes amid Keir Starmer’s escalating war on X, where community notes routinely dismantle government spin and unfiltered truth is delivered to the masses. If safeguarding kids were the real goal, it would be the likes of Snapchat in the crosshairs, given that thousands of real-world child sexual offences have originated from its use.

Instead, they’re going after X because, they claim, it provides the ability to make fake images of anyone in a bikini using the inbuilt Grok AI image generator.

Based on 2025 NSPCC and UK police data, Snapchat is linked to 40-48% of identified child grooming cases, Instagram around 9-11%, Facebook 7-9%, WhatsApp 9%, and X under 2%.

These numbers align with NSPCC’s alarming report on the surge in online grooming. The charity recorded over 7,000 Sexual Communication with a Child offences in 2023/24—an 89% spike since 2017/18.

Keep reading

UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk’s Grok AI chatbot.

Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person’s consent, were not “harmless images” but “weapons of abuse”.

The BBC has approached X for comment. It previously said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

It comes hours after Ofcom announced it was launching an investigation into X over “deeply concerning reports” about Grok altering images of people.

If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.

And if X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

In a statement, Kendall urged the regulator not to take “months and months” to conclude its investigation, and demanded it set out a timeline “as soon as possible”.

It is currently illegal to share deepfakes of adults in the UK, but the provisions in the Data (Use and Access) Act that would make it a criminal offence to create or request them had not been brought into force until now, despite the Act passing in June 2025.

Last week, campaigners accused the government of dragging its heels on implementing that law.

“Today I can announce to the House that this offence will be brought into force this week,” Kendall told MPs.

In addition to the Data Act, Kendall said she would also make the offence a “priority offence” under the Online Safety Act.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” she said.

“Let me be crystal clear – under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

“This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law.”

Keep reading

Starmer’s Looking for an Excuse to Ban X

Keir Starmer has signaled he is prepared to back regulatory action that could ultimately result in X being blocked in the UK.

The Prime Minister of the United Kingdom has suggested, more or less, that because Elon Musk’s AI chatbot Grok has been generating images of women and minors in bikinis, he’ll support going as far as hitting the kill switch and blocking access to the entire platform.

“The situation is disgraceful and disgusting,” Starmer said on Greatest Hits Radio; the station best known for playing ABBA and now, apparently, for frontline authoritarian tech policy announcements.

“X has got to get a grip of this, and Ofcom has our full support to take action…I’ve asked for all options to be on the table.”

“All options,” for those who don’t speak fluent Whitehall euphemism, now apparently includes turning Britain’s digital infrastructure into a sort of beige North Korea, where a bunch of government bureaucrats, armed with nothing but Online Safety Act censorship law and the panic of a 90s tabloid, get to decide which speech the public is allowed to see.

Now, you might be wondering: Surely he’s bluffing? Oh no. According to Downing Street sources, they’re quite serious.

And they’ve even named the mechanism: the Online Safety Act; that cheery little piece of legislation that sounds like it’s going to help grandmothers avoid email scams, but actually gives Ofcom the power to block platforms, fine them into oblivion, or ban them entirely if they don’t comply with government censorship orders.

Killing X isn’t a new idea. You may remember Morgan McSweeney, Keir Starmer’s Chief of Staff, founded the Centre for Countering Digital Hate. In 2024, leaks revealed that the group was trying to “Kill Musk’s Twitter.”

Keep reading

Virginia to Enforce Verification Law for Social Media on January 1, 2026, Despite Free Speech Concerns

Virginia is preparing to enforce a new online regulation that will curtail how minors access social media, setting up a direct clash between state lawmakers and advocates for digital free expression.

Beginning January 1, 2026, a law known as Senate Bill 854 will compel social media companies to confirm the ages of all users through “commercially reasonable methods” and to restrict anyone under sixteen to one hour of use per platform per day.

We obtained a copy of the bill for you here.

Parents will have the option to override those limits through what the statute calls “verifiable parental consent.”

The measure is written into the state’s Consumer Data Protection Act, and it bars companies from using any information gathered for age checks for any other purpose.
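As a sketch of what SB 854’s mechanics imply for platforms, the check below shows the one-hour cap with the parental override. The structure is an assumption for illustration; the statute describes outcomes, not code, and it separately bars reusing the age-check data for anything else, a constraint that lives in policy rather than in this logic.

```python
from dataclasses import dataclass

DAILY_LIMIT_SECONDS = 60 * 60  # SB 854's default: one hour per platform per day

@dataclass
class UserSession:
    verified_age: int          # from a "commercially reasonable" age check
    parental_override: bool    # "verifiable parental consent" lifts the cap
    seconds_used_today: int = 0

def may_continue(session: UserSession) -> bool:
    # Users sixteen and over fall outside the cap entirely.
    if session.verified_age >= 16:
        return True
    # A parent can override the default limit for an under-16 user.
    if session.parental_override:
        return True
    return session.seconds_used_today < DAILY_LIMIT_SECONDS
```

Notice the prerequisite baked into the function’s first check: the cap cannot be enforced on minors without first verifying the age of everyone.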

Lawmakers from both parties rallied behind the bill, portraying it as a way to reduce what they described as addictive and harmful online habits among young people.

Delegate Wendell Walker argued that social media “is almost like a drug addiction,” while Delegate Sam Rasoul said that “people are concerned about the addiction of screen time” and accused companies of building algorithms that “keep us more and more addicted.”

Enforcement authority falls to the Office of the Attorney General, which may seek injunctions or impose civil fines reaching $7,500 per violation for noncompliance.

But this policy, framed as a health measure, has triggered strong constitutional objections from the technology industry and free speech advocates.

The trade association NetChoice filed a federal lawsuit (NetChoice v. Miyares) in November 2025, arguing that Virginia’s statute unlawfully restricts access to lawful speech online.

We obtained a copy of the lawsuit for you here.

The complaint draws parallels to earlier moral panics over books, comic strips, rock music, and video games, warning that SB 854 “does not enforce parental authority; it imposes governmental authority, subject only to a parental veto.”

Keep reading

UK Lawmakers Propose Mandatory On-Device Surveillance and VPN Age Verification

Lawmakers in the United Kingdom are proposing amendments to the Children’s Wellbeing and Schools Bill that would require nearly all smartphones and tablets to include built-in, unremovable surveillance software.

The proposal appears under a section titled “Action to promote the well-being of children by combating child sexual abuse material (CSAM).”

We obtained a copy of the proposed amendments for you here.

The amendment text specifies that any “relevant device supplied for use in the UK must have installed tamper-proof system software which is highly effective at preventing the recording, transmitting (by any means, including livestreaming) and viewing of CSAM using that device.”

It further defines “relevant devices” as “smartphones or tablet computers which are either internet-connectable products or network-connectable products for the purposes of section 5 of the Product Security and Telecommunications Infrastructure Act 2022.”

Under this clause, manufacturers, importers, and distributors would be legally required to ensure that every internet-connected phone or tablet they sell in the UK meets this “CSAM requirement.”

Enforcement would occur “as if the CSAM requirement was a security requirement for the purposes of Part 1 of the Product Security and Telecommunications Infrastructure Act 2022.”

In practical terms, the only way for such software to “prevent the recording, transmitting (by any means, including livestreaming) and viewing of CSAM” would be for devices to continuously scan and analyze all photos, videos, and livestreams handled by the device.

That process would have to take place directly on users’ phones and tablets, examining both personal and encrypted material to determine whether any of it might be considered illegal content. Although the measure is presented as a child-safety protection, its operation would create a system of constant client-side scanning.

This means the software would inspect private communications, media, and files on personal devices without the user’s consent.

Such a mechanism would undermine end-to-end encryption and normalize pre-emptive surveillance built directly into consumer hardware.
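The amendment does not say how such software would work, but the technique usually proposed for mandates like this is on-device hash matching: every image the device handles is fingerprinted and compared against a blocklist before it is displayed, saved, or sent. The sketch below is a generic illustration of that approach, not the bill’s text; SHA-256 stands in for the perceptual hashes (PhotoDNA-style fingerprints) a real system would need.

```python
import hashlib

# Stand-in blocklist of known-image fingerprints. A real deployment would use
# perceptual hashes (robust to resizing and re-encoding), not cryptographic ones.
BLOCKLIST: set[str] = {"<fingerprint-1>", "<fingerprint-2>"}

def fingerprint(image_bytes: bytes) -> str:
    # SHA-256 keeps this sketch self-contained, but it matches exact files
    # only; that is why real proposals rely on fuzzier perceptual hashing.
    return hashlib.sha256(image_bytes).hexdigest()

def allow_media(image_bytes: bytes) -> bool:
    """Gate every capture, view, and send through the scanner.

    Because this runs before a messaging app encrypts anything, the check
    sits underneath end-to-end encryption rather than respecting it.
    """
    return fingerprint(image_bytes) not in BLOCKLIST
```

The fuzzier the matching gets, the more innocent images it flags, which is exactly the failure mode the German figures below illustrate.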

The latest figures from German law enforcement offer a clear warning about the risks of expanding this type of surveillance: in 2024, nearly half of all CSAM scanning tips received by Germany were errors.
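That error rate is unsurprising once the base-rate arithmetic is spelled out. The numbers below are hypothetical, chosen only to show the shape of the problem: when genuinely illegal material is rare, even an accurate scanner produces tips that are mostly false.

```python
# Hypothetical illustration of the base-rate problem behind scanning errors.
images_scanned = 10_000_000
illegal_rate   = 0.00001   # assume 1 in 100,000 images is actually illegal
hit_rate       = 0.99      # assume the scanner catches 99% of illegal images
false_positive = 0.0001    # assume it flags 0.01% of innocent images

illegal  = images_scanned * illegal_rate          # 100 illegal images
innocent = images_scanned - illegal

correct_tips = illegal * hit_rate                 # 99 correct tips
false_tips   = innocent * false_positive          # ~1,000 false tips

error_share = false_tips / (false_tips + correct_tips)
print(f"{error_share:.0%} of tips are errors")    # ~91% under these assumptions
```

The exact share depends on the assumed rates, but the mechanism is the same one behind the German numbers: scanning everyone to find a rare harm guarantees that much of what gets flagged is innocent.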

Keep reading