Virginia to Enforce Age Verification Law for Social Media on January 1, 2026, Despite Free Speech Concerns

Virginia is preparing to enforce a new online regulation that will curtail how minors access social media, setting up a direct clash between state lawmakers and advocates for digital free expression.

Beginning January 1, 2026, a law known as Senate Bill 854 will compel social media companies to confirm the ages of all users through “commercially reasonable methods” and to restrict anyone under sixteen to one hour of use per platform per day.

We obtained a copy of the bill for you here.

Parents will have the option to override those limits through what the statute calls “verifiable parental consent.”

The measure is written into the state’s Consumer Data Protection Act and bars companies from using information gathered for age verification for any other purpose.

Lawmakers from both parties rallied behind the bill, portraying it as a way to reduce what they described as addictive and harmful online habits among young people.

Delegate Wendell Walker argued that social media “is almost like a drug addiction,” while Delegate Sam Rasoul said that “people are concerned about the addiction of screen time” and accused companies of building algorithms that “keep us more and more addicted.”

Enforcement authority falls to the Office of the Attorney General, which may seek injunctions or impose civil fines of up to $7,500 per violation.

But this policy, framed as a health measure, has triggered strong constitutional objections from the technology industry and free speech advocates.

The trade association NetChoice filed a federal lawsuit (NetChoice v. Miyares) in November 2025, arguing that Virginia’s statute unlawfully restricts access to lawful speech online.

We obtained a copy of the lawsuit for you here.

The complaint draws parallels to earlier moral panics over books, comic strips, rock music, and video games, warning that SB 854 “does not enforce parental authority; it imposes governmental authority, subject only to a parental veto.”

Keep reading

New York To Demand Warning Labels On Social Media Platforms

New York is requiring warning labels on social media platforms about addictive features in a bid to address a youth mental health crisis.

Gov. Kathy Hochul signed the bill into law on Dec. 26, targeting infinite scrolling, auto-play videos, and algorithmic feeds that encourage prolonged use.

The law, S4505/A5346, sponsored by Democrats state Sen. Andrew Gounardes and Assemblymember Nily Rozic, requires social media platforms to display non-dismissible warnings when young users first encounter these features and at regular intervals during use.

As The Epoch Times’ Kimberley Hayek details below, the required warnings are modeled on consumer protections applied to products such as tobacco and alcohol, noting risks like increased anxiety, depression, and poor body image.

“Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use,” Hochul said in a statement.

“New Yorkers deserve transparency. With the amount of information that can be shared online, it is essential that we prioritize mental health and take the steps necessary to ensure that people are aware of any potential risks.”

Studies highlighted in the legislation suggest that teens spending more than three hours daily on social media face doubled risks of anxiety and depression symptoms. About half of adolescents report that platforms worsen their body image, and those with heavy usage are nearly twice as likely to describe their mental health as poor.

“New York families deserve honesty about how social media platforms impact mental health. By requiring warning labels based on the latest medical research, this law puts public health first and finally gives us the tools we need to make informed decisions,” Rozic said in a statement.

“I’m proud to sponsor this legislation alongside Senator Gounardes as part of our broader effort to create a safer digital environment for kids.”

In June 2024, Hochul signed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, also sponsored by Gounardes and Rozic, mandating parental consent for minors to access addictive algorithms while also banning unsolicited nighttime notifications.

The SAFE Act aims to address how platforms exploit vulnerabilities for engagement while reaping billions in ad revenue from minors. New York Attorney General Letitia James, who helped draft the bill, sought public input on it in 2024.

Keep reading

Junk Food Bans For SNAP Users In Some States Starting 2026: What To Know

Americans using Supplemental Nutrition Assistance Program (SNAP) benefits to purchase groceries may need to adjust their shopping habits in 2026 as some states will prohibit the use of SNAP funds to purchase certain “junk foods.”

Also starting next year, states will have to shoulder a larger portion of the cost of running the program. In addition, states could lose funds if their payment error rate is too high.

Here is what to know about the overhaul of America’s largest nutrition program.

Restrictions on Purchases in Some States

Eighteen states will restrict the purchase of certain foods lacking in nutritional value next year. The changes are being made under the banner of the Make America Healthy Again initiative launched by the Department of Health and Human Services. To institute the changes, the states had to submit waivers of federal rules to the Department of Agriculture, which oversees the nutrition program, and have them approved.

The starting dates for the restrictions and the foods prohibited vary by state.

Indiana, Iowa, Nebraska, Utah, and West Virginia will implement purchase restrictions on Jan. 1, 2026. Idaho, Oklahoma, Louisiana, Colorado, Texas, Virginia, and Florida have starting dates from February to April. Arkansas, Tennessee, Hawaii, South Carolina, North Dakota, and Missouri will begin their bans between July and October.

Most of these states have removed candy, soda, and energy drinks from the list of SNAP-eligible items.

In Tennessee and Iowa, SNAP beneficiaries cannot use the funds to purchase processed foods. Tennessee defines a processed food as one that has been changed in any way from its natural state.

Prepared desserts, such as cakes and cookies, are restricted in Florida and Missouri.

In Iowa, foods that are prepared for consumption or come with eating utensils may not be purchased with SNAP funds. Cold, unpackaged foods without utensils, such as bread, fruit, or canned goods, are still permitted.

Keep reading

Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading

Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason’s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedure” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading

YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube shared a “disappointing update” to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government, which initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was getting legal advice about how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability protections.

Keep reading

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

Keep reading