Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

Keep reading

GOP-Controlled Senate Committee Warns DC That Marijuana Is Federally Illegal, With ‘Enhanced Penalties’ For Sales Near Schools

GOP members of a powerful Senate committee are issuing a reminder that marijuana remains illegal under federal law and that the sale of cannabis near public schools and playgrounds can carry “enhanced penalties”—an issue they are specifically highlighting in relation to the location of dispensaries in Washington, D.C.

The Republican majority in the Senate Appropriations Committee released the text of a Financial Services and General Government (FSGG) spending bill and an attached report on Tuesday. As expected, the legislation itself retains a rider long championed by Rep. Andy Harris (R-MD) barring D.C. from using its tax dollars to legalize and regulate recreational marijuana sales, despite voters approving a ballot initiative to allow possession and home cultivation more than a decade ago.

In the report, a section on funding for “emergency planning and security costs” associated with the federal government’s presence in the District includes additional language related to cannabis enforcement and zoning issues.

Here’s the text of that section:

Marijuana Dispensary Proximity to Schools—The Committee reminds the District that the distribution, manufacturing, and sale of marijuana remains illegal under Federal law, which includes enhanced penalties for such distribution within one thousand feet of a public or private elementary, vocational, or secondary school or public or private college, junior college, or university, or a playground, among other real property where children frequent.

The report language is being released months after anti-marijuana organizations formally narced on several locally licensed cannabis businesses in D.C.—sending a letter to President Donald Trump, the U.S. attorney general and a federal prosecutor that identifies dispensaries they allege are too close to schools despite approval from District officials.

The groups said that while they were “pleased” to see former interim U.S. Attorney Ed Martin “take initial steps against one of the worst offenders” by threatening a locally licensed medical marijuana dispensary with criminal prosecution back in March, “we have not seen any public progress since then.”

Martin, for his part, has since been tapped by Trump to serve as U.S. pardon attorney.

Meanwhile, the underlying FSGG spending bill put forward by the committee’s GOP majority would continue to prohibit D.C. from creating a regulated, commercial cannabis market.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading

Wisconsin Lawmakers Propose VPN Ban and ID Checks on Adult Sites

Wisconsin legislators have found a new villain in their quest to save people from themselves: the Virtual Private Network.

The state’s latest moral technology initiative, split into Assembly Bill 105 and Senate Bill 130, would force adult websites to verify user ages and ban anyone connecting through a VPN.

It passed the Assembly in March and now waits in the Senate, where someone will have to pretend this is enforceable.

Supporters are selling the plan as a way to “protect minors from explicit material.”

The bill’s machinery reads like a privacy demolition project written by people who still call tech support to reset passwords.

The law would apply to any site that “knowingly and intentionally publishes or distributes material harmful to minors.” It then defines that material as anything lacking “serious literary, artistic, political, or scientific value for minors.”

The wording is broad enough to rope in half the internet, yet somehow manages to exclude “bona fide news” (as to be determined by the state) and cloud platforms that don’t create the content themselves.

Whether that covers social media depends on who you ask: lawyers, lobbyists, or whichever intern wrote the definitions section.

The bill instructs websites to delete verification data after access is granted or denied.

That sounds good until you recall how the tech industry handles deletion promises.

Au10tix left user records exposed for a year after pledging to delete them within 30 days. Tea suffered multiple breaches despite assurances of immediate deletion. In the real world, “deleted” often means “archived on an unsecured server until a hacker finds it.”

The headline feature is a rule penalizing anyone who uses a VPN to access restricted material. VPNs encrypt internet traffic and disguise user locations, which lawmakers apparently see as a threat to order.

The logic is that if people can hide their IP addresses, the state can’t check their ID to ensure they’re old enough to view certain content. That’s technically true and philosophically disturbing.

Officials in other places are already cheering this idea. Michigan introduced a proposal requiring internet providers to detect and block VPN traffic.

If Wisconsin adopts the rule, VPN users would become collateral damage. Journalists, activists, and everyday users who rely on encryption for safety would be swept up in the ban.

Keep reading

Lawmakers Want Proof of ID Before You Talk to AI

It was only a matter of time before someone in Congress decided that the cure for the internet’s ills was to make everyone show their papers.

The “Guidelines for User Age-verification and Responsible Dialogue Act of 2025,” or GUARD Act, has arrived to do just that.

We obtained a copy of the bill for you here.

Introduced by Senators Josh Hawley and Richard Blumenthal, the bill promises to “protect kids” from AI chatbots that allegedly whisper bad ideas into young ears.

The idea: force every chatbot developer in the country to check users’ ages with verified identification.

The senators call it “reasonable age verification.”

That means scanning your driver’s license or passport before you can talk to a digital assistant.

Keeping in mind that AI is being added to pretty much everything these days, the implications of this could be far-reaching.

Keep reading

Florida Attorney General Sues Roku Over Failure to Implement Age Verification, Privacy Concerns

Florida’s attorney general has filed a lawsuit against Roku, drawing attention to the growing privacy risks tied to smart devices that quietly track user behavior.

The case, brought by Attorney General James Uthmeier under the Florida Digital Bill of Rights, accuses the streaming company of collecting and selling the personal data of children without consent while refusing to take reasonable steps to determine which users are minors.

We obtained a copy of the lawsuit for you here.

The lawsuit portrays Roku as a company that profits from extensive data collection inside homes, including data from children. According to the complaint, Roku “collected, sold and enabled reidentification of sensitive personal data, including viewing habits, voice recordings and other information from children, without authorization or meaningful notice to Florida families.”

It continues, “Roku knows that some of its users are children but has consciously decided not to implement industry-standard user profiles to identify which of its users are children.”

Another passage states, “Roku buries its head in the sand so that it can continue processing and selling children’s valuable personal and sensitive data.”

The growing push for digital ID–based age verification is being framed as a way to protect children online, but privacy advocates warn it would do the opposite.

Keep reading

Instagram says it’s safeguarding teens by limiting them to PG-13 content

Meta says teenagers on Instagram will be restricted to seeing PG-13 content by default and won’t be able to change their settings without a parent’s permission.

By Barbara Ortutay, AP Technology Writer, The Associated Press

Teenagers on Instagram will be restricted to seeing PG-13 content by default and won’t be able to change their settings without a parent’s permission, Meta announced on Tuesday.

This means kids using teen-specific accounts will see photos and videos on Instagram that are similar to what they would see in a PG-13 movie — no sex, drugs, or dangerous stunts, among other things.

“This includes hiding or not recommending posts with strong language, certain risky stunts, and additional content that could encourage potentially harmful behaviors, such as posts showing marijuana paraphernalia,” Meta said in a blog post Tuesday, calling the update the most significant since it introduced teen accounts last year.

Anyone under 18 who signs up for Instagram is automatically placed into restrictive teen accounts unless a parent or guardian gives them permission to opt out. The teen accounts are private by default, have usage restrictions on them and already filter out more “sensitive” content, such as posts promoting cosmetic procedures. But kids often lie about their ages when they sign up for social media, and while Meta has begun using artificial intelligence to find such accounts, the company declined to say how many adult accounts it has determined to be minors since rolling out the feature earlier this year.

The company is also adding an even stricter setting that parents can set up for their children.

The changes come as the social media giant faces relentless criticism over harms to children. As it seeks to add safeguards for younger users, Meta has already promised it wouldn’t show inappropriate content to teens, such as posts about self-harm, eating disorders or suicide.

But this does not always work. A recent report, for instance, found that teen accounts researchers created were recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”

Keep reading

Whoops—Ohio Accidentally Excludes Most Major Porn Platforms From Anti-Porn Law

Remember when people used to say “Epic FAIL”? I’m sorry, but there’s no other way to describe Ohio’s new age verification law, which took effect on September 30.

A variation on a mandate that’s been sweeping U.S. statehouses, this law requires online platforms offering “material harmful to juveniles”—by which authorities mean porn—to check photo IDs or use “transactional data” (such as mortgage, education, and employment records) to verify that all visitors are adults.

But lawmakers have written the law in such a way that it excludes most major porn publishing platforms.

“This is why you don’t rush [age verification] bills into an omnibus,” commented the Free Speech Coalition’s Mike Stabile on Bluesky.

Ohio Republican lawmakers introduced a standalone age verification bill back in February, but it languished in a House committee. A similar bill introduced in 2024 also failed to advance out of committee.

The version that wound up passing this year did so as part of the state’s omnibus budget legislation (House Bill 96). This massive measure—more than 3,000 pages—includes a provision that any organization that “disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” must verify that anyone attempting to view that material is at least 18 years old.

The bill also states that such organizations must “utilize a geofence system maintained and monitored by a licensed location-based technology provider to dynamically monitor the geolocation of persons.”

Existing Ohio law defines material harmful to juveniles as “any material or performance describing or representing nudity, sexual conduct, sexual excitement, or sado-masochistic abuse” that “appeals to the prurient interest of juveniles in sex,” is “patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable for juveniles,” and “lacks serious literary, artistic, political, and scientific value for juveniles.”

Under the new law, online distributors of “material harmful to juveniles” that don’t comply with the age check requirement could face civil actions initiated by Ohio’s attorney general.

Supporters of the law portrayed it as a way to stop young Ohioans from being able to access online porn entirely. But the biggest purveyors of online porn—including Pornhub and similar platforms, which allow users to upload as well as view content—seem to be exempt from the law.

Among the organizations exempted from age verification requirements are providers of “an interactive computer service,” which is defined by Ohio lawmakers as having the same meaning as it does under federal law.

The federal law that defines “interactive computer service”—Section 230 of the Communications Decency Act—says it “means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

That’s a bit of a mouthful, but we have decades of jurisprudence parsing that definition. And it basically means any platform where third parties can create accounts and generate content, from social media sites to dating apps, message boards, classified ads, search engines, comment sections, and much more.

Platforms like Pornhub unambiguously fall within this category.

In fact, Pornhub is not blocking Ohio users as it has in most other states with age verification laws for online porn, because its parent company, Aylo, does not believe the law applies to it.

“As a provider of an ‘interactive computer service’ as defined under Section 230 of the Communications Decency Act, it is our understanding that we are not subject to the obligations under section 1349.10 of the Ohio Revised Code regarding mandated age verification for the ‘interactive computer services’ we provide, such as Pornhub,” Aylo told Mashable.

Keep reading

New York Wants Online Digital ID Rules for Social Media Feeds Under “SAFE For Kids Act”

New York is advancing a set of proposed regulations that would require social media platforms to verify users’ ages before granting access to algorithm-driven feeds or allowing nighttime alerts.

Attorney General Letitia James introduced the draft rules on Monday, tied to the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, which was signed into law last year by Governor Kathy Hochul.

Presented as part of an effort to reduce mental health harms linked to social media, the law would compel platforms to restrict algorithmic content for anyone under 18 or anyone who hasn’t completed an age verification process, which would mean the introduction of digital ID checks to access online platforms.

In those cases, users would be limited to seeing content in chronological order from accounts they already follow.

Platforms would also be barred from sending notifications between 12 a.m. and 6 a.m. to those users.

The rules give companies some flexibility in how they confirm a user’s age, as long as the method is considered effective and designed to protect personal data.

Acceptable alternatives to submitting a government ID include facial analysis that estimates age. Any identifying information collected during verification must be deleted “immediately,” according to the proposal.

For minors to access personalized algorithmic feeds, parental permission would be required.

That too involves a verification step, with the same data-deletion requirements in place once the process is complete.

The SAFE For Kids Act targets platforms where user-generated content is central and where at least 20 percent of time spent involves engagement with feeds tailored to user behavior or device data.

Keep reading

Australia enforces world’s harshest social media age crackdown

Australia is introducing the world’s toughest rules to keep children off social media, with platforms facing fines of up to AU$49.5 million if they fail to detect and remove underage users.

From December 10, social media companies must actively identify and deactivate accounts belonging to users under 16, block re-registration attempts, and provide proper appeals processes. Communications Minister Anika Wells has unveiled a list of “reasonable steps” platforms such as TikTok, Snapchat, Instagram, Facebook and YouTube must follow.

The measures demand that age assurance technology not be a “set-and-forget” system and cannot rely solely on self-declaration. Platforms are encouraged to adopt a layered or “waterfall approach” using multiple checks across the user experience to detect underage accounts. They must also remove existing accounts “with care and clear communication” and provide accessible review options for those who believe they were wrongly flagged.

Wells and controversial eSafety commissioner Julie Inman Grant will present the guidance directly to tech companies during a visit to the United States later this month. After trials proved the technology exists to meet the requirements, Wells said there is no excuse for companies to fall short.

Keep reading