Whoops—Ohio Accidentally Excludes Most Major Porn Platforms From Anti-Porn Law

Remember when people used to say “Epic FAIL”? I’m sorry, but there’s no other way to describe Ohio’s new age verification law, which took effect on September 30.

A variation on a mandate that’s been sweeping U.S. statehouses, this law requires online platforms offering “material harmful to juveniles”—by which authorities mean porn—to check photo IDs or use “transactional data” (such as mortgage, education, and employment records) to verify that all visitors are adults.

But lawmakers have written the law in such a way that it excludes most major porn publishing platforms.

“This is why you don’t rush [age verification] bills into an omnibus,” commented the Free Speech Coalition’s Mike Stabile on Bluesky.

Ohio Republican lawmakers introduced a standalone age verification bill back in February, but it languished in a House committee. A similar bill introduced in 2024 also failed to advance out of committee.

The version that wound up passing this year did so as part of the state’s omnibus budget legislation (House Bill 96). This massive measure—more than 3,000 pages—includes a provision that any organization that “disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” must verify that anyone attempting to view that material is at least 18 years old.

The bill also states that such organizations must “utilize a geofence system maintained and monitored by a licensed location-based technology provider to dynamically monitor the geolocation of persons.”

Existing Ohio law defines material harmful to juveniles as “any material or performance describing or representing nudity, sexual conduct, sexual excitement, or sado-masochistic abuse” that “appeals to the prurient interest of juveniles in sex,” is “patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable for juveniles,” and “lacks serious literary, artistic, political, and scientific value for juveniles.”

Under the new law, online distributors of “material harmful to juveniles” that don’t comply with the age check requirement could face civil actions initiated by Ohio’s attorney general.

Supporters of the law portrayed it as a way to stop young Ohioans from being able to access online porn entirely. But the biggest purveyors of online porn—including Pornhub and similar platforms, which allow users to upload as well as view content—seem to be exempt from the law.

Among the organizations exempted from age verification requirements are providers of “an interactive computer service,” which is defined by Ohio lawmakers as having the same meaning as it does under federal law.

The federal law that defines “interactive computer service”—Section 230 of the Communications Decency Act—says it “means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

That’s a bit of a mouthful, but we have decades of jurisprudence parsing that definition. And it basically means any platform where third parties can create accounts and generate content, from social media sites to dating apps, message boards, classified ads, search engines, comment sections, and much more.

Platforms like Pornhub unambiguously fall within this category.

In fact, Pornhub is not blocking Ohio users as it has in most other states with age verification laws for online porn, because its parent company, Aylo, does not believe the law applies to it.

“As a provider of an ‘interactive computer service’ as defined under Section 230 of the Communications Decency Act, it is our understanding that we are not subject to the obligations under section 1349.10 of the Ohio Revised Code regarding mandated age verification for the ‘interactive computer services’ we provide, such as Pornhub,” Aylo told Mashable.


New York Wants Online Digital ID Rules for Social Media Feeds Under “SAFE For Kids Act”

New York is advancing a set of proposed regulations that would require social media platforms to verify users’ ages before granting access to algorithm-driven feeds or allowing nighttime alerts.

Attorney General Letitia James introduced the draft rules on Monday, tied to the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, which was signed into law last year by Governor Kathy Hochul.

Presented as part of an effort to reduce mental health harms linked to social media, the law would compel platforms to restrict algorithmic content for anyone under 18 or anyone who hasn’t completed an age verification process, in practice introducing digital ID checks as a condition of accessing online platforms.

In those cases, users would be limited to seeing content in chronological order from accounts they already follow.

Platforms would also be barred from sending notifications between 12 a.m. and 6 a.m. to those users.

The rules give companies some flexibility in how they confirm a user’s age, as long as the method is considered effective and designed to protect personal data.

Acceptable alternatives to submitting a government ID include facial analysis that estimates age. Any identifying information collected during verification must be deleted “immediately,” according to the proposal.

For minors to access personalized algorithmic feeds, parental permission would be required.

That too involves a verification step, with the same data-deletion requirements in place once the process is complete.

The SAFE For Kids Act targets platforms where user-generated content is central and where at least 20 percent of time spent involves engagement with feeds tailored to user behavior or device data.


Australia enforces world’s harshest social media age crackdown

Australia is introducing the world’s toughest rules to keep children off social media, with platforms facing fines of up to AU$49.5 million if they fail to detect and remove underage users.

From December 10, social media companies must actively identify and deactivate accounts belonging to users under 16, block re-registration attempts, and provide proper appeals processes. Communications Minister Anika Wells has unveiled a list of “reasonable steps” platforms such as TikTok, Snapchat, Instagram, Facebook and YouTube must follow.

The measures demand that age assurance technology not be a “set-and-forget” system and cannot rely solely on self-declaration. Platforms are encouraged to adopt a layered or “waterfall approach” using multiple checks across the user experience to detect underage accounts. They must also remove existing accounts “with care and clear communication” and provide accessible review options for those who believe they were wrongly flagged.

Wells and the controversial eSafety Commissioner Julie Inman Grant will present the guidance directly to tech companies during a visit to the United States later this month. After trials proved the technology exists to meet the requirements, Wells said there is no excuse for companies to fall short.


Von der Leyen Unveils New EU Censorship Push, Online Digital ID Plans, in 2025 State of the Union Speech

European Commission President Ursula von der Leyen used her 2025 State of the Union speech to unveil a raft of new regulatory measures that introduce new challenges for digital rights and freedom of expression across the continent and the world.

Framed as measures for public health, democracy, and child protection, the proposals push the EU deeper into institutionalized censorship and online regulation.

Addressing the European Parliament, von der Leyen declared she is “appalled by the disinformation that threatens global progress on everything from measles to polio.”

Citing fears of a global health crisis, she introduced a “Global Health Resilience Initiative,” which she said the EU would lead.

This initiative is expected to tie online speech more tightly to global health narratives, laying the groundwork for broader suppression of dissenting views under the label of medical misinformation.

Another centerpiece of her address was the so-called “European Democracy Shield,” a program that we’ve covered in great detail, intended to streamline and centralize the Commission’s censorship machinery under the banner of fighting “foreign information manipulation and interference.”

Framing the internet as a battlefield, she said: “Our democracy is under attack. The rise in information manipulation and disinformation is dividing our societies.”

Expanding on that framework, she announced the creation of a new institution, the European Centre for Democratic Resilience.

According to von der Leyen, this center will allow the EU to scale up its ability “to monitor and detect information manipulation and disinformation.”

But the agenda didn’t stop there. She introduced the Media Resilience Program, which she claimed would support “independent journalism and media literacy.”

In practice, however, such efforts often result in government-approved messaging being amplified, while dissenting outlets don’t get funded.

Von der Leyen pointed to declining local journalism in rural communities and claimed: “This has created many news deserts where disinformation thrives… This is why we will launch a new Media Resilience Program – it will support independent journalism and media literacy.”

Despite the existing Digital Services Act already mandating age verification (and therefore digital ID) online, von der Leyen floated a new, even more restrictive direction for internet access among young people.


Australia Orders Tech Giants to Enforce Age Verification Digital ID by December 10

Australia is preparing to enforce one of the most invasive online measures in its history under the guise of child safety.

With the introduction of mandatory age verification across social media platforms, privacy advocates are warning that the policy, set to begin December 10, 2025, risks eroding fundamental digital rights for every user, not just those under 16.

eSafety Commissioner Julie Inman Grant has told tech giants like Google, Meta, TikTok, and Snap that they must be ready to detect and shut down accounts held by Australians under the age threshold.

She has made it clear that platforms are expected to implement broad “age assurance” systems across their services, and that “self-declaration of age will not, on its own, be enough to constitute reasonable steps.”

The new rules stem from the Online Safety Amendment (Social Media Minimum Age) Act 2024, which gives the government sweeping new authority to dictate how users verify their age before accessing digital services. Any platform that doesn’t comply could be fined up to $31 million USD (AU$49.5 million).

While the government claims the law isn’t a ban on social media for children under 16, in practice, it forces platforms to block these users unless they can pass age checks, which means a digital ID.

There will be no penalties for children or their parents, but platforms face immense legal and financial pressure to enforce restrictions, pressure that almost inevitably leads to surveillance-based systems.

The Commissioner said companies must “detect and de-activate these accounts from 10 December, and provide account holders with appropriate information and support before then.”

These expectations extend to providing “clear, age-appropriate communications” and making sure users can download their data and find emotional or mental health resources when their accounts are terminated.

She further stated that “efficacy will require layered safety measures, sometimes known as a ‘waterfall approach’,” a term often associated with collecting increasing amounts of personal data at multiple steps of user interaction.

Such layered systems often rely on facial scanning, government ID uploads, biometric estimation, or AI-powered surveillance tools to estimate age.


Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that purports to protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain categories. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1985), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the child-protection argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech, since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that might trigger one of the enumerated harms. While the platform would technically be the entity implementing these policies, the government would be driving content removal.


Brazil Uses Child Safety as Cover for Online Digital ID Surge

Brazil’s Chamber of Deputies has advanced a bill marketed as a child protection measure, drawing sharp condemnation from lawmakers who say the process ignored legislative rules and opens the door to broad censorship of online content.

Bill PL 2628/2022, which outlines mandatory rules for digital platforms operating in Brazil, moved forward at an unusually fast pace after Chamber President Hugo Motta approved an urgency request on August 19.

That decision cut off critical steps in the legislative process, including committee review and broader debate, allowing the proposal to reach the full floor for a vote just one day later.

The urgency motion, Requerimento de Urgência REQ 1785/2025, passed without a roll-call vote. Instead, Motta used a symbolic vote, a method that records no individual positions and relies on the presiding officer’s perception of consensus. Requests for a formal, recorded vote were rejected outright.

Congressman Marcel van Hattem (NOVO-RS) accused the Chamber’s leadership of bypassing democratic norms. He said Motta approved the urgency request to expand the “censorship” of the Lula government.

Other deputies joined the protest, calling the process arbitrary and abusive.

Under the bill, digital platforms must verify users’ ages, take down material labeled offensive to minors, and comply with orders from a newly created federal oversight authority.

That body would hold sweeping powers to enforce regulations, issue sanctions, and even suspend platforms for up to 30 days in some circumstances, potentially without a full court decision.

Although the urgency request had been filed back in May, it gained renewed traction after social media influencer Felca released a series of videos exposing what he called the “adultization” of children online. His content prompted widespread media coverage and pushed the topic of online child safety to the forefront. In response, Motta committed to fast-tracking related legislation.


US Plan To Copy UK’s Disastrous Online Digital ID Verification Is Winning Friends in the Senate

The Kids Online Safety Act (KOSA) is moving forward in the US Senate with 16 new co-sponsors as of July 31, 2025, reviving a proposal that copies the same type of provision found in the UK’s controversial Online Safety Act, which has caused much backlash across the Atlantic.

In Britain, that measure forces online platforms to implement digital ID age checks before granting access to content deemed “harmful,” a policy that has caused intense resentment over privacy violations, the erosion of anonymity, and government overreach in the realm of free speech.

Now, US lawmakers are considering a similar framework, with more senators from both parties throwing their support behind the bill in recent weeks.

Marketed as a way to shield children from harmful online material, KOSA has gained prominent backing from Apple, which has publicly praised it as a step toward improving online safety. Yet beyond the reassuring branding, the legislation contains provisions that raise serious concerns for free expression and user privacy.

If enacted, the bill would give the Federal Trade Commission authority to investigate and sue platforms over content labeled as “harmful” to minors. This would push websites toward aggressive content moderation to avoid liability, creating an environment where speech is heavily filtered without the government ever issuing direct censorship orders.

The legislation also instructs the Secretary of Commerce, FTC, and FCC to explore “systems to verify age at the device or operating system level.” Such a mandate paves the way for nationwide digital identification, where every user’s online activity could be tied to a verifiable real-world identity.

Once anonymity is removed, the scope for surveillance and profiling expands dramatically, with personal data stored and potentially exploited by both corporations and government agencies.

Advocates of a free and open internet warn that laws like KOSA exploit the emotional appeal of child safety to introduce infrastructure that enables ongoing monitoring and identity tracking. Even with recent changes, such as removing state attorneys general from enforcement, these core concerns remain.

Senator Marsha Blackburn defended the bill, stating, “Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children.” Yet KOSA’s structure could end up reinforcing the dominance of large tech firms, which are best positioned to implement costly verification systems and handle the resulting data.

The bill’s earlier version stalled in the House after leadership, including Speaker Mike Johnson, questioned its impact on free speech. Johnson remarked that he “love[s] the principle, but the details of that are very problematic,” a sentiment still shared by many who view KOSA as a gateway to lasting restrictions on online freedoms.

If this legislation moves forward, it will not simply affect what minors can view; it will alter the fundamental architecture of the internet, embedding identity verification and top-down content control into its design.


Age-Restricted Taxi Tracking? The Absurd Consequences Of Britain’s Online Safety Act

I was recently travelling in the UK and, after a lot of sightseeing on foot, decided to order a taxi to go back to my hotel.

I searched the internet for a local taxi firm and found one with relative ease. I called the number and went through an automated process that worked well, and I managed to book a taxi quickly. The computer-generated voice told me my taxi was on its way, and I was sent a link so that I could monitor its progress. The message also said the link would show me the driver’s name, the type of vehicle, and its registration number. When I opened the link, however, it was blocked as age-restricted content.

I can’t understand why anyone would consider a link to show you the progress of a taxi that you have ordered to be age-inappropriate content.

I can only assume it has to do with the recent Online Safety Act, although, coincidentally, I had recently changed mobile providers, so it might simply be that the provider I’d switched to applies a different standard as to what is considered adult content.

I doubt this, on the basis that the company I moved to, Talkmobile, is a wholly owned subsidiary of the company I had used previously, Vodafone, and the block screen itself came from Vodafone.

Whoever decided that this link contains age-restricted content clearly hasn’t thought it through.

Consider the scenario where a 17-year-old girl can’t get hold of her parents; home is too far away, or she doesn’t want to walk, so she orders a taxi through a reputable taxi service.

A link is sent to her so she can see the progress of the taxi that she has ordered.

Of course, she can’t open it, because it’s considered age-inappropriate and, being only 17, she has no way to prove she’s over 18 and open the link to track her taxi.

Thankfully it’s rare, but we do know that there are predators out there who look for people who are vulnerable. Someone waiting for a lift is not difficult to spot: every time a car approaches, the person looks up from whatever they’re doing to see if it’s the car that’s picking them up.

All it would take is for a predator to be around at that time, pull the window down, and say, “Did you call for a taxi?” Because she’s just ordered one, she believes this is her taxi, so she gets in, perhaps never to be seen again. All because some moron has decided that a link to follow the progress of a taxi is something you’re not allowed to see if you’re under the age of 18.


DIGITAL ID: The Shocking Plan to Kill Free Speech Forever

The U.S. is on the verge of launching a dystopian online surveillance machine—and disturbingly, Republicans are helping make it law.

The SCREEN Act and KOSA claim to protect kids, but they’re Trojan horses. If passed, every American adult would be forced to verify their ID to access the internet—just like in Australia, where “age checks” morphed into speech policing. In the UK, digital ID is already required for jobs, housing, and healthcare.

This is how they silence dissent: by tying your identity to everything you read, say, or buy online.

The trap is nearly shut. Once it locks in, online freedom vanishes forever.

Will Americans wake up before it’s too late? Watch Maria Zeee expose the full blueprint—and how little time we have left.
