Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason‘s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedures” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update deployed against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading

YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube shared a “disappointing update” with millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government, which initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was getting legal advice about how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability protections.

Keep reading

Congress Pushes for Nationwide Internet Age Verification Plan

Republican lawmakers are proposing a new way to hold tech companies accountable for complying with age verification laws, despite resistance from websites like Pornhub. The App Store Accountability Act (ASA), introduced by Senator Mike Lee (R-UT) and Representative John James (R-MI), proposes a different model: requiring app stores themselves to verify users’ ages and pass that information to apps when they are downloaded.

The bill is part of a broader push in Congress to tighten safeguards for minors online and has earned support from major tech companies, including Facebook parent company Meta, Pinterest, and Snap. Pinterest CEO Bill Ready argues that one standard would simplify the process and reduce the confusion created by a patchwork of state requirements. “The need for a federal standard is urgent,” he said.

“I think most people at most of these companies probably do want to protect kids,” Sen. Lee said, adding that support from tech companies like Pinterest “makes a big difference.”

However, the proposal faces resistance from civil liberties groups and digital rights advocates. Critics warn that compulsory age verification could limit access to lawful online content, raising First Amendment concerns. They also cite significant privacy risks, arguing that systems requiring users to submit sensitive personal information could expose them to data breaches or misuse.

Some major websites have rejected attempts to enforce online age verification. Pornhub has withdrawn its services from states that require government-issued ID or similar credentials for access to adult material. The company argued that these laws push users toward unregulated platforms while forcing supposedly legitimate sites to collect data they would prefer not to hold.

In 2025, the Supreme Court upheld a Texas age-verification law for explicit content, with the majority concluding that states may require age checks to prevent minors from viewing harmful material.

Supporters of federal action contend that the ASA would avoid the growing compliance difficulties posed by differing state regulations. Sen. Lee has stated, “I don’t believe that there’s anything unlawful, unconstitutional, or otherwise problematic about this legislation,” arguing that an app-store-centered approach would reduce repeated verification across multiple platforms.

Keep reading

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

Keep reading

GOP-Controlled Senate Committee Warns DC That Marijuana Is Federally Illegal, With ‘Enhanced Penalties’ For Sales Near Schools

GOP members of a powerful Senate committee are issuing a reminder that marijuana remains illegal under federal law and that the sale of cannabis near public schools and playgrounds can carry “enhanced penalties”—an issue they are specifically highlighting in relation to the location of dispensaries in Washington, D.C.

The Republican majority in the Senate Appropriations Committee released the text of a Financial Services and General Government (FSGG) spending bill and an attached report on Tuesday. As expected, the legislation itself retains a rider long championed by Rep. Andy Harris (R-MD) barring D.C. from using its tax dollars to legalize and regulate recreational marijuana sales, despite voters approving a ballot initiative to allow possession and home cultivation more than a decade ago.

In the report, a section on funding for “emergency planning and security costs” associated with the federal government’s presence in the District includes additional language related to cannabis enforcement and zoning issues.

Here’s the text of that section:

Marijuana Dispensary Proximity to Schools—The Committee reminds the District that the distribution, manufacturing, and sale of marijuana remains illegal under Federal law, which includes enhanced penalties for such distribution within one thousand feet of a public or private elementary, vocational, or secondary school or public or private college, junior college, or university, or a playground, among other real property where children frequent.

The report language is being released months after anti-marijuana organizations formally narced on several locally licensed cannabis businesses in D.C.—sending a letter to President Donald Trump, the U.S. attorney general and a federal prosecutor that identifies dispensaries they allege are too close to schools despite approval from District officials.

The groups said that while they were “pleased” to see former interim U.S. Attorney Ed Martin “take initial steps against one of the worst offenders” by threatening a locally licensed medical marijuana dispensary with criminal prosecution back in March, “we have not seen any public progress since then.”

Martin, for his part, has since been tapped by Trump to serve as U.S. pardon attorney.

Meanwhile, the underlying FSGG spending bill put forward by the committee’s GOP majority would continue to prohibit D.C. from creating a regulated, commercial cannabis market.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading

Wisconsin Lawmakers Propose VPN Ban and ID Checks on Adult Sites

Wisconsin legislators have found a new villain in their quest to save people from themselves: the Virtual Private Network.

The state’s latest moral technology initiative, split into Assembly Bill 105 and Senate Bill 130, would force adult websites to verify user ages and ban anyone connecting through a VPN.

It passed the Assembly in March and now waits in the Senate, where someone will have to pretend this is enforceable.

Supporters are selling the plan as a way to “protect minors from explicit material.”

The bill’s machinery reads like a privacy demolition project written by people who still call tech support to reset passwords.

The law would apply to any site that “knowingly and intentionally publishes or distributes material harmful to minors.” It then defines that material as anything lacking “serious literary, artistic, political, or scientific value for minors.”

The wording is broad enough to rope in half the internet, yet somehow manages to exclude “bona fide news” (as determined by the state) and cloud platforms that don’t create the content themselves.

Whether that covers social media depends on who you ask: lawyers, lobbyists, or whichever intern wrote the definitions section.

The bill instructs websites to delete verification data after access is granted or denied.

That sounds good until you recall how the tech industry handles deletion promises.

Au10tix left user records exposed for a year after pledging to delete them within 30 days. Tea suffered multiple breaches despite assurances of immediate deletion. In the real world, “deleted” often means “archived on an unsecured server until a hacker finds it.”

The headline feature is a rule penalizing anyone who uses a VPN to access restricted material. VPNs encrypt internet traffic and disguise user locations, which lawmakers apparently see as a threat to order.

The logic is that if people can hide their IP addresses, the state can’t check their ID to ensure they’re old enough to view certain content. That’s technically true and philosophically disturbing.

Officials in other places are already cheering this idea. Michigan introduced a proposal requiring internet providers to detect and block VPN traffic.

If Wisconsin adopts the rule, VPN users would become collateral damage. Journalists, activists, and everyday users who rely on encryption for safety would be swept up in the ban.

Keep reading

Lawmakers Want Proof of ID Before You Talk to AI

It was only a matter of time before someone in Congress decided that the cure for the internet’s ills was to make everyone show their papers.

The “Guidelines for User Age-verification and Responsible Dialogue Act of 2025,” or GUARD Act, has arrived to do just that.

We obtained a copy of the bill for you here.

Introduced by Senators Josh Hawley and Richard Blumenthal, the bill promises to “protect kids” from AI chatbots that allegedly whisper bad ideas into young ears.

The idea: force every chatbot developer in the country to check users’ ages with verified identification.

The senators call it “reasonable age verification.”

That means scanning your driver’s license or passport before you can talk to a digital assistant.

Keeping in mind that AI is being added to pretty much everything these days, the implications of this could be far-reaching.

Keep reading