Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks

Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth.

We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health.

Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17.

Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems.

The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge” or what can be “fairly implied” as knowledge that a user is under 13.

That open-ended standard puts enormous pressure on companies to avoid errors.

The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms. In effect, KOSMA would link access to everyday online life to a form of digital ID.

That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.
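To see how a bill that "does not directly require age verification" still produces ID checks, consider a minimal sketch of the sign-up gate a risk-averse platform would build under KOSMA's knowledge standard. The names, fields, and checks below are hypothetical illustrations, not language from the bill.

```typescript
// Hypothetical sketch: how a platform might gate sign-ups to avoid
// "fairly implied" knowledge that a user is under 13 (KOSMA's standard).
// None of these names come from the bill; this illustrates the incentive only.

interface AgeEvidence {
  claimedBirthDate: Date;        // self-asserted, easily falsified
  verifiedDocument?: boolean;    // e.g., a scanned driver's license
  biometricEstimate?: number;    // e.g., age inferred from a facial scan
}

function yearsSince(date: Date): number {
  return (Date.now() - date.getTime()) / (365.25 * 24 * 3600 * 1000);
}

// Because liability attaches to implied knowledge, the "safe" design
// rejects anyone who cannot affirmatively prove they are 13 or older.
function mayCreateAccount(evidence: AgeEvidence): boolean {
  if (evidence.verifiedDocument) {
    return yearsSince(evidence.claimedBirthDate) >= 13;
  }
  if (evidence.biometricEstimate !== undefined) {
    return evidence.biometricEstimate >= 13;
  }
  // No hard proof: the risk-averse default is to deny access,
  // which is how a "no verification required" bill still yields ID checks.
  return false;
}
```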

The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.

Keep reading

Discord Expands Age Verification ID System to More Regions

Discord is pressing forward with government ID checks for users in new regions, even after a major customer-support breach in October 2025 exposed sensitive identity documents belonging to tens of thousands of people.

The expansion of its age-verification system reflects growing pressure under the United Kingdom’s Online Safety Act, a law that effectively compels platforms to collect and process personal identification data in order to comply with its censorship and content-control mandates.

The October 2025 incident highlighted exactly why such measures alarm privacy advocates.

Around 70,000 Discord users had images of government-issued IDs leaked after attackers gained access to a third-party customer service system tied to the company.

The hackers claim to have extracted as much as 1.6 terabytes of information, including 8.4 million support tickets and over 100 gigabytes of transcripts.

Discord disputed the scale but admitted the breach stemmed from a compromised contractor account within its outsourced Zendesk environment, not its own internal systems.

Despite the exposure, Discord continues to expand mandatory age verification. The company’s new “privacy-forward age assurance” program is required for all UK and Australian users beginning December 9, 2025.

Users must verify that they are over 18 to unblur “sensitive content,” disable message-request filters, or enter age-restricted channels.

Verification occurs through the third-party vendors k-ID and, in some UK cases, Persona, which process either a government ID scan or a facial-analysis selfie to confirm age.
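Based on Discord's public description, the flow works roughly like the sketch below: a vendor handles the ID scan or selfie, and the platform keeps only the resulting over-18 determination. The endpoint, payload, and field names here are assumptions for illustration, not the actual k-ID or Persona APIs.

```typescript
// Rough sketch of a vendor-mediated age-assurance flow, as described above.
// The vendor URL, payload, and response fields are hypothetical placeholders,
// not the real k-ID or Persona APIs.

type Method = "id_scan" | "facial_estimate";

interface VendorResult {
  isAdult: boolean;   // vendor's over-18 determination
  method: Method;     // how the determination was made
}

// The platform hands the user off to the vendor and later receives a result;
// ideally the raw ID image or selfie never touches the platform's servers.
async function verifyWithVendor(userId: string, method: Method): Promise<VendorResult> {
  const res = await fetch("https://vendor.example/verify", {  // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, method }),
  });
  return res.json() as Promise<VendorResult>;
}

// Gate applied before unblurring "sensitive content" or opening
// an age-restricted channel.
async function canViewSensitiveContent(userId: string): Promise<boolean> {
  const result = await verifyWithVendor(userId, "facial_estimate");
  return result.isAdult;
}
```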

Keep reading

Florida’s “App Store Accountability Act” Would Deputize Big Tech to Verify User IDs for App Access

In Florida, Senator Alexis Calatayud has introduced a proposal that could quietly reshape how millions of Americans experience the digital world.

The App Store Accountability Act (SB 1722), presented as a safeguard for children, would require every app marketplace to identify users by age category, verify that data through “commercially available methods,” and secure recurring parental consent whenever an app’s policies change.
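The bill's mechanics can be pictured as a small bookkeeping problem for the marketplace: every account carries a verified age category, and any policy change invalidates prior parental consent. The sketch below is a hypothetical rendering of those requirements, not statutory text; the category names and fields are illustrative only.

```typescript
// Hypothetical model of SB 1722's core obligations for an app marketplace.
// Category names and fields are illustrative, not language from the bill.

type AgeCategory = "under13" | "teen13to15" | "teen16to17" | "adult";

interface Account {
  category: AgeCategory;                      // verified via "commercially available methods"
  consentForAppVersion: Map<string, number>;  // appId -> policy version consented to
}

// Recurring consent: when an app's policies change (its version bumps),
// prior parental consent no longer counts and must be obtained again.
function canDownload(account: Account, appId: string, policyVersion: number): boolean {
  if (account.category === "adult") return true;
  return account.consentForAppVersion.get(appId) === policyVersion;
}

function recordParentalConsent(account: Account, appId: string, policyVersion: number): void {
  account.consentForAppVersion.set(appId, policyVersion);
}
```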

The legislation is ambitious. If enacted, it would take effect in July 2027, with enforcement beginning the following year.

Each violation could carry penalties of up to $7,500, along with injunctions and attorney fees.

On its surface, this is a regulatory measure aimed at strengthening parental oversight and protecting minors from online harms. Yet it runs up against a larger philosophical and rights struggle.

For much of modern political thought, the relationship between authority and liberty has revolved around who decides what constitutes protection. Florida’s proposal situates that question in the hands of private corporations. The bill effectively deputizes Big Tech app store operators, such as Apple and Google, as arbiters of digital identity, compelling them to verify user ages and manage parental permissions across every platform.

Millions of Floridians could be required to submit identifying details or official documents simply to access or update apps. This process, while justified as a measure of security, reintroduces the age-old tension between the protective role of the state and the autonomy of the citizen.

By making identity verification the gateway to digital access, the law risks creating an infrastructure in which surveillance becomes a condition of participation. It is a move from voluntary oversight to systemic authentication, merging the roles of government and corporation in a single mechanism of control.

The proposal may collide with long-established constitutional principles. One of the objections lies in the concept of prior restraint. By conditioning minors’ ability to download or continue using apps on verified Big Tech platforms, the bill requires permission before access, effectively placing all expressive content behind a regulatory gate.

Apps today are not mere entertainment; they are conduits of news, art, religion, and political discourse. Restricting that access risks transforming a parental safeguard into a systemic filter for speech.

The burden falls most heavily on minors, whose First Amendment protections are often ignored in public debate.

Keep reading

Democrats Demand Apple and Google Ban X From App Stores

Apple and Google are under mounting political pressure from Democrats over X’s AI chatbot, Grok, after lawmakers accused the platform of producing images of women, and allegedly of minors, in bikinis.

While the outrage targets X specifically, the ability to generate such material is not unique to one platform. Similar image manipulation and synthetic content creation can be found across nearly every major AI system available today.

Yet, the letter sent to Apple CEO Tim Cook and Google CEO Sundar Pichai by Senators Ron Wyden, Ben Ray Luján, and Ed Markey asked the tech giants only about X and demanded that the companies remove X from their app stores entirely.

X has around 557 million users.

We obtained a copy of the letter for you here.

The lawmakers wrote that “X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms.”

They pointed to Google’s developer rules, which prohibit apps that facilitate “the exploitation or abuse of children,” and Apple’s policy against apps that are “offensive” or “just plain creepy.”

Ignoring the First Amendment completely, “Apple and Google must remove these apps from the app stores until X’s policy violations are addressed,” the letter states.

Dozens of generative systems, including open-source image models that can’t be controlled or limited by anyone, can produce the same kinds of bikini images with minimal prompting.

The senators cited prior examples of Apple and Google removing apps such as ICEBlock and Red Dot under government pressure.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the letter stated.

Keep reading

Here’s PROOF That UK’s X Ban Has NOTHING To Do With Protecting Children

As UK authorities ramp up their assault on free speech, a viral post shared by Elon Musk exposes the glaring hypocrisy in the government’s “protect the children” narrative. Data from the National Society for the Prevention of Cruelty to Children (NSPCC) and police forces reveals Snapchat as the epicenter of online child sexual grooming, dwarfing X’s minimal involvement.

This comes amid Keir Starmer’s escalating war on X, where community notes routinely dismantle government spin, and unfiltered truth is delivered to the masses. If safeguarding kids were the real goal, the likes of Snapchat would be in the crosshairs, given that thousands of real-world child sexual offences have originated from its use.

Instead, they’re going after X because, they claim, it provides the ability to make fake images of anyone in a bikini using the inbuilt Grok AI image generator.

Based on 2025 NSPCC and UK police data, Snapchat is linked to 40-48% of identified child grooming cases, Instagram around 9-11%, Facebook 7-9%, WhatsApp 9%, and X under 2%.

These numbers align with NSPCC’s alarming report on the surge in online grooming. The charity recorded over 7,000 Sexual Communication with a Child offences in 2023/24—an 89% spike since 2017/18.

Keep reading

EU says it is ‘seriously looking’ into Musk’s Grok AI over sexual deepfakes of minors

The European Commission said on Jan 5 it is “very seriously looking” into complaints that Mr Elon Musk’s AI tool Grok is being used to generate and disseminate sexually explicit child-like images.

“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with child-like images. This is not spicy. This is illegal. This is appalling,” EU digital affairs spokesman Thomas Regnier told reporters.

He added: “This has no place in Europe.”

Complaints of abuse began hitting Mr Musk’s X social media platform, where Grok is available, after an “edit image” button for the generative artificial intelligence tool was rolled out in late December.

But Grok maker xAI, run by Mr Musk, said earlier in January it was scrambling to fix flaws in its AI tool.

The public prosecutor’s office in Paris has also expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography.

Keep reading

Virginia to Enforce Verification Law for Social Media on January 1, 2026, Despite Free Speech Concerns

Virginia is preparing to enforce a new online regulation that will curtail how minors access social media, setting up a direct clash between state lawmakers and advocates for digital free expression.

Beginning January 1, 2026, a law known as Senate Bill 854 will compel social media companies to confirm the ages of all users through “commercially reasonable methods” and to restrict anyone under sixteen to one hour of use per platform per day.

We obtained a copy of the bill for you here.

Parents will have the option to override those limits through what the statute calls “verifiable parental consent.”
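In enforcement terms, the statute amounts to a per-platform daily timer with a consent override. Here is a minimal sketch of that logic, assuming hypothetical names and storage that are not drawn from the bill:

```typescript
// Hypothetical sketch of SB 854's usage rule: users under 16 get one hour
// per platform per day unless "verifiable parental consent" lifts the cap.
// Names and structure are illustrative only.

const DAILY_LIMIT_MS = 60 * 60 * 1000; // one hour

interface MinorProfile {
  verifiedAge: number;        // from "commercially reasonable" age checks
  parentalOverride: boolean;  // verifiable parental consent on file
  usageTodayMs: number;       // accumulated session time since midnight
}

function maySessionContinue(profile: MinorProfile): boolean {
  if (profile.verifiedAge >= 16) return true;  // cap applies only under 16
  if (profile.parentalOverride) return true;   // parent lifted the limit
  return profile.usageTodayMs < DAILY_LIMIT_MS;
}
```

Note that the check only works if the platform already knows every user's verified age, which is why the one-hour rule drags age verification for all users in behind it.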

The measure is written into the state’s Consumer Data Protection Act, and it bars companies from using any information gathered for age checks for any other purpose.

Lawmakers from both parties rallied behind the bill, portraying it as a way to reduce what they described as addictive and harmful online habits among young people.

Delegate Wendell Walker argued that social media “is almost like a drug addiction,” while Delegate Sam Rasoul said that “people are concerned about the addiction of screen time” and accused companies of building algorithms that “keep us more and more addicted.”

Enforcement authority falls to the Office of the Attorney General, which may seek injunctions or impose civil fines reaching $7,500 per violation for noncompliance.

But this policy, framed as a health measure, has triggered strong constitutional objections from the technology industry and free speech advocates.

The trade association NetChoice filed a federal lawsuit (NetChoice v. Miyares) in November 2025, arguing that Virginia’s statute unlawfully restricts access to lawful speech online.

We obtained a copy of the lawsuit for you here.

The complaint draws parallels to earlier moral panics over books, comic strips, rock music, and video games, warning that SB 854 “does not enforce parental authority; it imposes governmental authority, subject only to a parental veto.”

Keep reading

Teen Marijuana Use ‘Remained Stable’ As Legalization Expands, Federal Health Officials Acknowledge

Teen marijuana use “remained stable” this year even as more states have enacted legalization, according to an annual federally funded survey.

The Monitoring the Future (MTF) survey—supported by the National Institute on Drug Abuse (NIDA) and conducted every year for decades by the University of Michigan—examines substance use trends among 8th, 10th and 12th grade students. And the latest results add to a large body of evidence contradicting prohibitionist claims that state-level legalization would drive increases in underage cannabis usage.

The rate of past-year marijuana use for 12th graders was 25.7 percent, relatively consistent with recent years and at its lowest level since 1992. The same was true of 10th graders, 15.6 percent of whom used marijuana in the past year. Among 8th grade students, 7.6 percent reported past-year cannabis consumption.

For past-month cannabis use, that rate was 17.1 percent for 12th graders, a slight uptick from the prior year but significantly lower than its record high of 37.1 percent in 1978 before any state had legalized cannabis for adult or medical use. For 10th grade students, the rate this past year was 9.4 percent, and for 8th grade it was 4 percent—consistent with recent years.

“We are encouraged that adolescent drug use remains relatively low and that so many teens choose not to use drugs at all,” NIDA Director Nora Volkow said in a press release. “It is critical to continue to monitor these trends closely to understand how we can continue to support teens in making healthy choices and target interventions where and when they are needed.”

The survey also found that rates of past-month abstention from marijuana, alcohol and nicotine were “stable for all grades” (66 percent for 12th grade, 82 percent for 10th grade and 91 percent for 8th grade).

The survey also asked about the use of hemp-based cannabinoid products, including intoxicating compounds such as delta-8 THC. It found that 9 percent of 12th graders, 6 percent of 10th graders and 2 percent of 8th graders used products in that category in the past year.

This year’s MTF survey was based on 23,726 student surveys collected from 270 public and private schools between February and June 2025.

To reform advocates, the results reinforce the idea that a regulatory framework for cannabis, in which licensed retailers must check IDs and implement other safeguards against unlawful diversion, is a far more effective policy than prohibition, which leaves the market to illicit suppliers whose products may be untested and who have little reason to check ages at all.

To that point, a separate federally funded study out of Canada that was released last month found that youth marijuana use rates actually declined after the country legalized cannabis.

The study was released about three months after German officials released a separate report on their country’s experience with legalizing marijuana nationwide.

Keep reading

Shein Can’t Sell Sex Toys Unless It Checks IDs, French Court Says

Shein, a cheap-stuff superstore based in China that is popular worldwide, cannot sell sex toys unless it checks purchaser IDs, a French court has ruled. The case comes after the French government tried to shut down Shein for three months.

International attention on the case has focused on the fact that Shein—through its third-party vendor marketplace—was temporarily selling what’s been described as “childlike sex dolls.” That’s appalling, of course. But understandable disgust and anger about that aspect has overshadowed a bigger story.

According to the BBC, the court ordered age verification measures to be enacted for the sale of all “adult” items, with a potential fine of €10,000 (about $11,700) for each breach.

Sex Toys: Age Verification’s Next Frontier?

“I don’t live in France and I don’t shop at Shein,” you might be thinking. “Why should I care?”

Because, my friends, this is another sign of where online age verification is going.

Politicians and activists—in the U.S. and around the world—initially pushed age verification measures as a requirement for porn websites. Who could be against stopping kids from watching hardcore pornography? they asked anyone who objected (conveniently eliding the facts that these bans are often broad enough to cover all sorts of sexuality-related material, and that they won’t affect just children but will invade the privacy of countless adults trying to access protected speech).

Then we started hearing about the need to implement age verification measures—checking IDs or requiring facial scans and so on—on all social media platforms. Now we’re hearing about age verification for video games, age verification for vibrators, age verification for everything.

Texas lawmakers earlier this year introduced a measure that would have mandated age verification for sex toy sales online. It failed to advance, but at the rate things are going I don’t think that will be the last we hear of it.

Measures like these could mean anyone who wants to purchase sex toys or sexual wellness devices online will have to attach their identity to the purchase—opening them up to surveillance, hackers, and so on.

Keep reading

The UK’s Plan to Put an Age Verification Chaperone in Every Pocket

UK officials are preparing to urge Apple and Google to redesign their operating systems so that every phone and computer sold in the country can automatically block nude imagery unless the user has proved they are an adult.

The proposal, part of the Home Office’s upcoming plan under the banner of combating violence against women and girls, would rely on technology built directly into devices, with software capable of scanning images locally to detect explicit material.

Under the plan, as reported by FT, such scanning would be turned on by default. Anyone wanting to take, send, or open an explicit photo would first have to verify their age using a government-issued ID or a biometric check.
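Mechanically, that implies a default-on classifier sitting between every image and the screen, with an age check as the only off switch. The sketch below illustrates that gating logic under stated assumptions; none of the names come from Apple, Google, or the Home Office.

```typescript
// Hypothetical sketch of the device-level gate the Home Office is describing:
// every image is classified locally, and explicit ones stay blocked unless
// the device holds a verified-adult credential. All names are illustrative.

interface DeviceState {
  adultVerified: boolean;  // set after a government ID or biometric check
}

// Placeholder for an on-device ML classifier; a real system would run a
// local nudity-detection model here. Returns a 0..1 confidence score.
async function classifyNudityLocally(image: Uint8Array): Promise<number> {
  return 0; // stub: no model is bundled in this sketch
}

const BLOCK_THRESHOLD = 0.8; // illustrative confidence cutoff

// Runs by default on every image taken, sent, or opened.
async function mayDisplayImage(image: Uint8Array, device: DeviceState): Promise<boolean> {
  const score = await classifyNudityLocally(image);
  if (score < BLOCK_THRESHOLD) return true;  // not explicit: show normally
  return device.adultVerified;               // explicit: require prior age check
}
```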

The goal, officials say, is to prevent children from being exposed to sexual material or drawn into exploitative exchanges online.

People briefed on the discussions said the Home Office had explored the possibility of making these tools a legal requirement but decided, for now, to rely on encouragement rather than legislation.

Even so, the expectation is that large manufacturers will come under intense pressure to comply.

The government’s approach reflects growing anxiety about how easily minors can access sexual content and how grooming can occur through everyday apps.

Instead of copying Australia’s decision to ban social media use for under-16s, British ministers have chosen to focus on controlling imagery itself.

Safeguarding minister Jess Phillips has praised technology firms that already filter content at the device level. She cited HMD Global, maker of Nokia phones, for embedding child-protection software called HarmBlock, created by UK-based SafeToNet, which automatically blocks explicit images from being viewed or shared.

Apple and Google have built smaller-scale systems of their own. Apple’s “Communication Safety” function scans photos in apps like Messages, AirDrop, and FaceTime and warns children when nudity is detected, but teens can ignore the alert.

Google’s Family Link and “sensitive content warnings” work similarly on Android, though they stop short of scanning across all apps. Both companies allow parents to apply restrictions, but neither has a universal filter that covers the entire operating system.

The Home Office wants to go further, calling for a system that would block any nude image unless an adult identity check has been passed.

Keep reading