Virginia to Enforce Verification Law for Social Media on January 1, 2026, Despite Free Speech Concerns

Virginia is preparing to enforce a new online regulation that will curtail how minors access social media, setting up a direct clash between state lawmakers and advocates for digital free expression.

Beginning January 1, 2026, a law known as Senate Bill 854 will compel social media companies to confirm the ages of all users through “commercially reasonable methods” and to restrict anyone under sixteen to one hour of use per platform per day.

We obtained a copy of the bill for you here.

Parents will have the option to override those limits through what the statute calls “verifiable parental consent.”

The measure is written into the state’s Consumer Data Protection Act, and it bars companies from using any information gathered for age checks for any other purpose.
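
SB 854 leaves the mechanics to the platforms, but the core requirement is simple to model. Below is a minimal, hypothetical sketch of a per-user daily budget with a parental override; every name here is invented for illustration, and the bill itself prescribes the outcome, not an implementation.

```python
from dataclasses import dataclass, field
from datetime import date

DEFAULT_DAILY_LIMIT_MINUTES = 60  # SB 854's default cap for users under 16


@dataclass
class MinorUsageTracker:
    """Hypothetical per-user screen-time budget (illustrative only)."""

    # Set via "verifiable parental consent"; None means the statutory default.
    parental_limit_minutes: int | None = None
    _minutes_used: dict[date, int] = field(default_factory=dict)

    @property
    def daily_limit(self) -> int:
        if self.parental_limit_minutes is not None:
            return self.parental_limit_minutes
        return DEFAULT_DAILY_LIMIT_MINUTES

    def record_usage(self, day: date, minutes: int) -> None:
        self._minutes_used[day] = self._minutes_used.get(day, 0) + minutes

    def may_continue(self, day: date) -> bool:
        return self._minutes_used.get(day, 0) < self.daily_limit


tracker = MinorUsageTracker()
tracker.record_usage(date(2026, 1, 2), 45)
print(tracker.may_continue(date(2026, 1, 2)))  # True: 45 of 60 minutes used
tracker.record_usage(date(2026, 1, 2), 20)
print(tracker.may_continue(date(2026, 1, 2)))  # False: budget exhausted
```

Of course, a cap like this is only enforceable if the platform already knows which users are under sixteen, which is why the law's age-verification requirement applies to everyone.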

Lawmakers from both parties rallied behind the bill, portraying it as a way to reduce what they described as addictive and harmful online habits among young people.

Delegate Wendell Walker argued that social media “is almost like a drug addiction,” while Delegate Sam Rasoul said that “people are concerned about the addiction of screen time” and accused companies of building algorithms that “keep us more and more addicted.”

Enforcement authority falls to the Office of the Attorney General, which may seek injunctions or impose civil fines reaching $7,500 per violation for noncompliance.

But this policy, framed as a health measure, has triggered strong constitutional objections from the technology industry and free speech advocates.

The trade association NetChoice filed a federal lawsuit (NetChoice v. Miyares) in November 2025, arguing that Virginia’s statute unlawfully restricts access to lawful speech online.

We obtained a copy of the lawsuit for you here.

The complaint draws parallels to earlier moral panics over books, comic strips, rock music, and video games, warning that SB 854 “does not enforce parental authority; it imposes governmental authority, subject only to a parental veto.”

Keep reading

Teen Marijuana Use ‘Remained Stable’ As Legalization Expands, Federal Health Officials Acknowledge

Teen marijuana use “remained stable” this year even as more states have enacted legalization, according to an annual federally funded survey.

The Monitoring the Future (MTF) survey—supported by the National Institute on Drug Abuse (NIDA) and conducted every year for decades by the University of Michigan—examines substance use trends among 8th, 10th and 12th grade students. And the latest results add to a large body of evidence contradicting prohibitionist claims that state-level legalization would drive increases in underage cannabis usage.

The rate of past-year marijuana use among 12th graders was 25.7 percent, roughly consistent with recent years but the lowest level since 1992. The pattern was similar for 10th graders, 15.6 percent of whom used marijuana in the past year. Among 8th grade students, 7.6 percent reported past-year cannabis consumption.

For past-month cannabis use, the rate was 17.1 percent among 12th graders, a slight uptick from the prior year but far below the record high of 37.1 percent in 1978, before any state had legalized cannabis for adult or medical use. Among 10th graders, the past-month rate was 9.4 percent, and among 8th graders it was 4 percent, both consistent with recent years.

“We are encouraged that adolescent drug use remains relatively low and that so many teens choose not to use drugs at all,” NIDA Director Nora Volkow said in a press release. “It is critical to continue to monitor these trends closely to understand how we can continue to support teens in making healthy choices and target interventions where and when they are needed.”

The survey also found that the share of students reporting past-month abstention from marijuana, alcohol and nicotine was “stable for all grades” (66 percent for 12th grade, 82 percent for 10th grade and 91 percent for 8th grade).

The survey also asked about the use of hemp-based cannabinoid products, including intoxicating compounds such as delta-8 THC. It found that 9 percent of 12th graders, 6 percent of 10th graders and 2 percent of 8th graders used products in that category in the past year.

This year’s MTF survey was based on data from 23,726 student surveys submitted by 270 public and private schools between February and June 2025.

To reform advocates, the results reinforce the idea that a regulatory framework for cannabis, in which licensed retailers must check IDs and implement other safeguards against unlawful diversion, is a far more effective policy than prohibition, which cedes the market to illicit suppliers whose products may be untested and who face no enforced age-gating.

To that point, a separate federally funded study out of Canada that was released last month found that youth marijuana use rates actually declined after the country legalized cannabis.

The study was published about three months after German officials released a separate report on their country’s experience with legalizing marijuana nationwide.

Keep reading

Shein Can’t Sell Sex Toys Unless It Checks IDs, French Court Says

Shein, a cheap-stuff superstore based in China that is popular worldwide, cannot sell sex toys unless it checks purchaser IDs, a French court has ruled. The case comes after the French government tried to shut down Shein for three months.

International attention on the case has focused on the fact that Shein—through its third-party vendor marketplace—was temporarily selling what’s been described as “childlike sex dolls.” That’s appalling, of course. But understandable disgust and anger about that aspect have overshadowed a bigger story.

According to the BBC, the court ordered age verification measures to be implemented for the sale of all “adult” items, with a potential fine of €10,000 (about $11,700) for each breach.

Sex Toys: Age Verification’s Next Frontier?

“I don’t live in France and I don’t shop at Shein,” you might be thinking. “Why should I care?”

Because, my friends, this is another sign of where online age verification is going.

Politicians and activists—in the U.S. and around the world—initially pushed age verification measures as a requirement for porn websites. Who could be against stopping kids from watching hardcore pornography? they asked anyone who objected (conveniently eliding the facts that these bans are often broad enough to cover all sorts of sexuality-related material, and that they won’t affect just children but will invade the privacy of countless adults trying to access protected speech).

Then we started hearing about the need to implement age verification measures—checking IDs or requiring facial scans and so on—on all social media platforms. Now we’re hearing about age verification for video games, age verification for vibrators, age verification for everything.

Texas lawmakers earlier this year introduced a measure that would have mandated age verification for sex toy sales online. It failed to advance, but at the rate things are going I don’t think that will be the last we hear of it.

Measures like these could mean anyone who wants to purchase sex toys or sexual wellness devices online will have to attach their identity to the purchase—opening them up to surveillance, hackers, and so on.

Keep reading

The UK’s Plan to Put an Age Verification Chaperone in Every Pocket

UK officials are preparing to urge Apple and Google to redesign their operating systems so that every phone and computer sold in the country can automatically block nude imagery unless the user has proved they are an adult.

The proposal, part of an upcoming Home Office plan premised on combating violence against women and girls, would rely on technology built directly into devices, with software capable of scanning images locally to detect explicit material.

Under the plan, as reported by FT, such scanning would be turned on by default. Anyone wanting to take, send, or open an explicit photo would first have to verify their age using a government-issued ID or a biometric check.

The goal, officials say, is to prevent children from being exposed to sexual material or drawn into exploitative exchanges online.

People briefed on the discussions said the Home Office had explored the possibility of making these tools a legal requirement but decided, for now, to rely on encouragement rather than legislation.

Even so, the expectation is that large manufacturers will come under intense pressure to comply.

The government’s approach reflects growing anxiety about how easily minors can access sexual content and how grooming can occur through everyday apps.

Instead of copying Australia’s decision to ban social media use for under-16s, British ministers have chosen to focus on controlling imagery itself.

Safeguarding minister Jess Phillips has praised technology firms that already filter content at the device level. She cited HMD Global, maker of Nokia phones, for embedding child-protection software called HarmBlock, created by UK-based SafeToNet, which automatically blocks explicit images from being viewed or shared.

Apple and Google have built smaller-scale systems of their own. Apple’s “Communication Safety” function scans photos in apps like Messages, AirDrop, and FaceTime and warns children when nudity is detected, but teens can ignore the alert.

Google’s Family Link and “sensitive content warnings” work similarly on Android, though they stop short of scanning across all apps. Both companies allow parents to apply restrictions, but neither has a universal filter that covers the entire operating system.
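
The gap between those systems and what the Home Office reportedly wants is, in effect, a policy decision layered on the same scanning pipeline. Here is a rough, hypothetical sketch of the two behaviors; the nudity score is assumed to come from some on-device classifier, since neither company exposes such an API publicly.

```python
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()
    WARN = auto()   # current Apple/Google model: warn, but the user may proceed
    BLOCK = auto()  # reported Home Office model: block unless age-verified


def screen_image(nudity_score: float, *, age_verified: bool,
                 blanket_block: bool, threshold: float = 0.8) -> Action:
    """Gate an image given a score in [0, 1] from a hypothetical on-device model."""
    if nudity_score < threshold:
        return Action.ALLOW
    if blanket_block:
        # The reported UK approach: on by default, with only a government-ID
        # or biometric age check unlocking the image.
        return Action.ALLOW if age_verified else Action.BLOCK
    # Today's opt-in systems warn minors but let them proceed.
    return Action.WARN


print(screen_image(0.95, age_verified=False, blanket_block=False))  # Action.WARN
print(screen_image(0.95, age_verified=False, blanket_block=True))   # Action.BLOCK
print(screen_image(0.95, age_verified=True, blanket_block=True))    # Action.ALLOW
```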

The Home Office wants to go further, calling for a system that would block any nude image unless an adult identity check has been passed.

Keep reading

UK Lawmakers Propose Mandatory On-Device Surveillance and VPN Age Verification

Lawmakers in the United Kingdom are proposing amendments to the Children’s Wellbeing and Schools Bill that would require nearly all smartphones and tablets to include built-in, unremovable surveillance software.

The proposal appears under a section titled “Action to promote the well-being of children by combating child sexual abuse material (CSAM).”

We obtained a copy of the proposed amendments for you here.

The amendment text specifies that any “relevant device supplied for use in the UK must have installed tamper-proof system software which is highly effective at preventing the recording, transmitting (by any means, including livestreaming) and viewing of CSAM using that device.”

It further defines “relevant devices” as “smartphones or tablet computers which are either internet-connectable products or network-connectable products for the purposes of section 5 of the Product Security and Telecommunications Infrastructure Act 2022.”

Under this clause, manufacturers, importers, and distributors would be legally required to ensure that every internet-connected phone or tablet they sell in the UK meets this “CSAM requirement.”

Enforcement would occur “as if the CSAM requirement was a security requirement for the purposes of Part 1 of the Product Security and Telecommunications Infrastructure Act 2022.”

In practical terms, the only way for such software to “prevent the recording, transmitting (by any means, including livestreaming) and viewing of CSAM” would be for devices to continuously scan and analyze all photos, videos, and livestreams handled by the device.

That process would have to take place directly on users’ phones and tablets, examining both personal and encrypted material to determine whether any of it might be considered illegal content. Although the measure is presented as a child-safety protection, its operation would create a system of constant client-side scanning.

This means the software would inspect private communications, media, and files on personal devices without the user’s consent.

Such a mechanism would undermine end-to-end encryption and normalize pre-emptive surveillance built directly into consumer hardware.
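
Mechanically, such scanning generally means comparing a perceptual hash of each image against a database of hashes of known illegal material, the approach pioneered server-side by tools like Microsoft’s PhotoDNA. Below is a deliberately simplified sketch, with toy 16-bit hashes and a toy threshold; real perceptual hashes are far longer, and their details are proprietary.

```python
# Toy model of hash-based client-side scanning. A perceptual hash maps
# similar images to similar bit strings, so matching must be fuzzy.

KNOWN_BAD_HASHES = {0b1011_0110_1100_0011, 0b0100_1110_0001_1010}  # illustrative
MATCH_THRESHOLD = 3  # max differing bits still treated as a match


def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where the two hashes differ."""
    return (a ^ b).bit_count()


def flags_image(image_hash: int) -> bool:
    """True if the hash is 'close enough' to any blocklisted hash."""
    return any(hamming_distance(image_hash, bad) <= MATCH_THRESHOLD
               for bad in KNOWN_BAD_HASHES)


print(flags_image(0b1011_0110_1100_0001))  # True: 1 bit from a known hash
print(flags_image(0b0000_0000_1111_1111))  # False: 9 bits from both known hashes
```

The fuzzy threshold is what lets near-duplicates of known images still match, but it is also what allows unrelated images to collide and be flagged in error.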

The latest figures from German law enforcement offer a clear warning about the risks of expanding this type of surveillance: in 2024, nearly half of all CSAM scanning tips received by Germany were errors.

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D-IL) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three would require it outright: the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333).

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

Australian Leaders and Legacy Media Celebrate Launch of Online Digital ID Age Verification Law

It was sold as a “historic day,” the kind politicians like to frame with national pride and moral purpose.

Cameras flashed in Canberra as Australia’s Prime Minister Anthony Albanese stood at the podium, declaring victory in the fight to “protect children.”

What Australians actually got was a nationwide digital ID system. Starting December 10, every citizen logging into select online platforms must now pass through digital ID verification, biometric scans, face matching, and document checks, all justified as a way to keep under-16s off social media.

Kids are now banned from certain platforms, but it’s the adults who must hand over their faces, IDs, and biometric data to prove they’re not kids.

“Protecting children” has been converted into a universal surveillance upgrade for everyone.

According to Albanese, who once said that if he became a dictator the first thing he would do would be to ban social media, the Online Safety Amendment (Social Media Minimum Age) Bill 2024 will “change lives.”

He described it as a “profound reform” that will “reverberate around the world,” giving parents “peace of mind” and inspiring “the global community” to copy Australia’s example.

The Prime Minister’s pride, he said, had “never been greater.” Listening to him, you’d think he’d cured cancer rather than made face scans mandatory to log in to Facebook.

Keep reading

Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason‘s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedure” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading