Here’s PROOF That UK’s X Ban Has NOTHING To Do With Protecting Children

As UK authorities ramp up their assault on free speech, a viral post shared by Elon Musk exposes the glaring hypocrisy in the government’s “protect the children” narrative. Data from the National Society for the Prevention of Cruelty to Children (NSPCC) and police forces reveals Snapchat as the epicenter of online child sexual grooming, with a share of cases that dwarfs X’s minimal involvement.

This comes amid Keir Starmer’s escalating war on X, where community notes routinely dismantle government spin and unfiltered truth is delivered to the masses. If safeguarding kids were the real goal, the likes of Snapchat would be in the crosshairs, given that thousands of real-world child sexual offences have originated from its use.

Instead, they’re going after X because, they claim, its built-in Grok AI image generator can be used to make fake images of anyone in a bikini.

Based on 2025 NSPCC and UK police data, Snapchat is linked to 40-48% of identified child grooming cases, Instagram around 9-11%, Facebook 7-9%, WhatsApp 9%, and X under 2%.

These numbers align with NSPCC’s alarming report on the surge in online grooming. The charity recorded over 7,000 Sexual Communication with a Child offences in 2023/24—an 89% spike since 2017/18.

Keep reading

UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk’s Grok AI chatbot.

Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person’s consent, were not “harmless images” but “weapons of abuse”.

The BBC has approached X for comment. It previously said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

It comes hours after Ofcom announced it was launching an investigation into X over “deeply concerning reports” about Grok altering images of people.

If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.

And if X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

In a statement, Kendall urged the regulator not to take “months and months” to conclude its investigation, and demanded it set out a timeline “as soon as possible”.

It is currently illegal to share deepfakes of adults in the UK, but the provisions in the Data (Use and Access) Act that would make it a criminal offence to create or request them had not been brought into force until now, despite the Act passing in June 2025.

Last week, campaigners accused the government of dragging its heels on implementing that law.

“Today I can announce to the House that this offence will be brought into force this week,” Kendall told MPs.

In addition to bringing the Data Act offence into force, Kendall said she would make it a “priority offence” under the Online Safety Act.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” she said.

“Let me be crystal clear – under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

“This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law.”

Keep reading

Starmer’s Looking for an Excuse to Ban X

Keir Starmer has signaled he is prepared to back regulatory action that could ultimately result in X being blocked in the UK.

The Prime Minister of the United Kingdom has suggested, more or less, that because Elon Musk’s AI chatbot Grok has been generating images of women and minors in bikinis, he’ll support going as far as hitting the kill switch and blocking access to the entire platform.

“The situation is disgraceful and disgusting,” Starmer said on Greatest Hits Radio, the station best known for playing ABBA and now, apparently, for frontline authoritarian tech policy announcements.

“X has got to get a grip of this, and Ofcom has our full support to take action…I’ve asked for all options to be on the table.”

“All options,” for those who don’t speak fluent Whitehall euphemism, now apparently includes turning Britain’s digital infrastructure into a sort of beige North Korea, where a bunch of government bureaucrats, armed with nothing but Online Safety Act censorship law and the panic of a 90s tabloid, get to decide which speech the public is allowed to see.

Now, you might be wondering: Surely he’s bluffing? Oh no. According to Downing Street sources, they’re quite serious.

And they’ve even named the mechanism: the Online Safety Act, that cheery little piece of legislation that sounds like it’s going to help grandmothers avoid email scams but actually gives Ofcom the power to block platforms, fine them into oblivion, or ban them entirely if they don’t comply with government censorship orders.

Killing X isn’t a new idea. You may remember that Morgan McSweeney, Keir Starmer’s Chief of Staff, founded the Centre for Countering Digital Hate. In 2024, leaks revealed that the group was trying to “Kill Musk’s Twitter.”

Keep reading

Virginia to Enforce Verification Law for Social Media on January 1, 2026, Despite Free Speech Concerns

Virginia is preparing to enforce a new online regulation that will curtail how minors access social media, setting up a direct clash between state lawmakers and advocates for digital free expression.

Beginning January 1, 2026, a law known as Senate Bill 854 will compel social media companies to confirm the ages of all users through “commercially reasonable methods” and to restrict anyone under sixteen to one hour of use per platform per day.

We obtained a copy of the bill for you here.

Parents will have the option to override those limits through what the statute calls “verifiable parental consent.”
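
To get a sense of the bookkeeping this would force on platforms, here is a toy sketch of a per-user daily cap with a parental override. Every name in it is hypothetical; SB 854 specifies the one-hour limit and the consent mechanism, not any particular implementation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, Optional

DAILY_LIMIT_SECONDS = 60 * 60  # one hour per platform per day for under-16s

@dataclass
class MinorUsageTracker:
    """Hypothetical per-user record a platform might keep under SB 854."""
    parental_override: bool = False  # set via "verifiable parental consent"
    usage: Dict[date, int] = field(default_factory=dict)  # day -> seconds used

    def record(self, seconds: int, day: Optional[date] = None) -> None:
        day = day or date.today()
        self.usage[day] = self.usage.get(day, 0) + seconds

    def may_continue(self, day: Optional[date] = None) -> bool:
        if self.parental_override:
            return True
        day = day or date.today()
        return self.usage.get(day, 0) < DAILY_LIMIT_SECONDS

tracker = MinorUsageTracker()
tracker.record(59 * 60)
print(tracker.may_continue())  # True: a minute left today
tracker.record(2 * 60)
print(tracker.may_continue())  # False: the hour is spent
```

Note what even this trivial version implies: the platform must know who is under sixteen, which is where the age verification mandate comes in.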

The measure is written into the state’s Consumer Data Protection Act, and it bars companies from using any information gathered for age checks for any other purpose.

Lawmakers from both parties rallied behind the bill, portraying it as a way to reduce what they described as addictive and harmful online habits among young people.

Delegate Wendell Walker argued that social media “is almost like a drug addiction,” while Delegate Sam Rasoul said that “people are concerned about the addiction of screen time” and accused companies of building algorithms that “keep us more and more addicted.”

Enforcement authority falls to the Office of the Attorney General, which may seek injunctions or impose civil fines reaching $7,500 per violation for noncompliance.

But this policy, framed as a health measure, has triggered strong constitutional objections from the technology industry and free speech advocates.

The trade association NetChoice filed a federal lawsuit (NetChoice v. Miyares) in November 2025, arguing that Virginia’s statute unlawfully restricts access to lawful speech online.

We obtained a copy of the lawsuit for you here.

The complaint draws parallels to earlier moral panics over books, comic strips, rock music, and video games, warning that SB 854 “does not enforce parental authority; it imposes governmental authority, subject only to a parental veto.”

Keep reading

UK Lawmakers Propose Mandatory On-Device Surveillance and VPN Age Verification

Lawmakers in the United Kingdom are proposing amendments to the Children’s Wellbeing and Schools Bill that would require nearly all smartphones and tablets to include built-in, unremovable surveillance software.

The proposal appears under a section titled “Action to promote the well-being of children by combating child sexual abuse material (CSAM).”

We obtained a copy of the proposed amendments for you here.

The amendment text specifies that any “relevant device supplied for use in the UK must have installed tamper-proof system software which is highly effective at preventing the recording, transmitting (by any means, including livestreaming) and viewing of CSAM using that device.”

It further defines “relevant devices” as “smartphones or tablet computers which are either internet-connectable products or network-connectable products for the purposes of section 5 of the Product Security and Telecommunications Infrastructure Act 2022.”

Under this clause, manufacturers, importers, and distributors would be legally required to ensure that every internet-connected phone or tablet they sell in the UK meets this “CSAM requirement.”

Enforcement would occur “as if the CSAM requirement was a security requirement for the purposes of Part 1 of the Product Security and Telecommunications Infrastructure Act 2022.”

In practical terms, the only way for such software to “prevent the recording, transmitting (by any means, including livestreaming) and viewing of CSAM” would be for devices to continuously scan and analyze all photos, videos, and livestreams handled by the device.

That process would have to take place directly on users’ phones and tablets, examining both personal and encrypted material to determine whether any of it might be considered illegal content. Although the measure is presented as a child-safety protection, its operation would create a system of constant client-side scanning.

This means the software would inspect private communications, media, and files on personal devices without the user’s consent.

Such a mechanism would undermine end-to-end encryption and normalize pre-emptive surveillance built directly into consumer hardware.
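
To make concrete what that would look like, here is a minimal, purely illustrative sketch of the hash-matching approach such scanning systems are generally built on. Everything in it is hypothetical: real deployments match against perceptual fingerprints (PhotoDNA-style hashes that survive resizing and re-encoding) rather than the plain SHA-256 used here for brevity.

```python
import hashlib
from pathlib import Path

# Hypothetical on-device blocklist of known-image digests. Real systems use
# perceptual hashes; SHA-256 keeps this sketch short.
BLOCKLIST: set[str] = {
    "9f2feb0f1ef425b292f2f94bc8eaa026e1f71cf6096cf0eaa7948ecbce1ea813",
}

def digest(path: Path) -> str:
    """Hash a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_transmit(path: Path) -> bool:
    """Return True if the file is allowed to leave the device.

    Under a client-side scanning mandate, a check like this runs on the
    user's own hardware, before encryption, for every photo, video, or
    livestream frame the device handles.
    """
    return digest(path) not in BLOCKLIST

if __name__ == "__main__":
    outgoing = Path("holiday_photo.jpg")  # hypothetical file
    if outgoing.exists():
        print("allowed" if may_transmit(outgoing) else "blocked and flagged")
```

The point of the sketch is not the hashing; it is the placement. The check sits between users and their own camera roll, before encryption, which is exactly why it cannot coexist with genuine end-to-end encryption.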

The latest figures from German law enforcement offer a clear warning about the risks of expanding this type of surveillance: in 2024, nearly half of all CSAM scanning tips received by German authorities were errors.
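
That error rate is not an accident of sloppy engineering; it falls straight out of the arithmetic of scanning for rare content. The toy calculation below uses assumed rates, not figures from the German report, to show how even a highly accurate scanner ends up with tips that are half false alarms.

```python
# Toy base-rate arithmetic. All three rates are assumptions for illustration,
# not figures from the German report.
sensitivity = 0.99          # assumed: flags 99% of files that truly match
false_positive_rate = 1e-4  # assumed: wrongly flags 1 in 10,000 innocent files
prevalence = 1e-4           # assumed: 1 in 10,000 scanned files truly matches

true_flags = sensitivity * prevalence
false_flags = false_positive_rate * (1 - prevalence)
error_share = false_flags / (true_flags + false_flags)

print(f"{error_share:.0%} of tips are false alarms")
# -> 50% of tips are false alarms: when the targeted content is about as rare
#    as the scanner's own error rate, half of everything it flags is innocent.
```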

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three of them, the App Store Accountability Act (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333), would require it outright.

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading

Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason’s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedure” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

Nebraska Attorney General Calls Marijuana A ‘Poison’ And Says People Who Buy It From A Tribe Within The State Do So ‘At Their Own Peril’

The attorney general of Nebraska says people who buy marijuana under a Native American tribe’s planned legal market on its reservation within the state do so “at their own peril,” implying enforcement action against citizens for purchasing what he described as a “poison” if they take it beyond the territory’s borders.

During a press conference focused on an unrelated executive order, Gov. Jim Pillen (R) and Attorney General Mike Hilgers (R) were asked about ongoing negotiations with the Omaha Tribe of Nebraska over a tobacco tax compact and the tribe’s move to legalize cannabis within the prohibitionist state.

“I think that my position is crystal clear. I’m totally opposed in recreational marijuana,” the governor said. “If the Omaha tribe progresses to that extent, my view is really simple: There’s not going to be Nebraskans going into the Omaha buying recreational marijuana. We’ll take whatever steps it is to keep our state values and keep that from happening.”

Hilgers, the state attorney general, also spoke about the tribe’s cannabis program alongside the governor, as well as during a separate press briefing on Wednesday.

While compacts between the state and tribal governments can be “good” for both parties, he said what the Omaha tribe has proposed is both a usurpation of tax revenue from tobacco sales and a willful defiance of state laws around marijuana.

Keep reading

39 Bipartisan State And Territory Attorneys General Push Congress To Ban Intoxicating Hemp Products

A bipartisan coalition of 39 state and territory attorneys general is calling on Congress to clarify the federal definition of hemp and impose regulations preventing the sale of intoxicating cannabinoid products.

In a letter sent to the Republican chairs of the House and Senate Appropriations and Agriculture Committees on Friday, members of the National Association of Attorneys General (NAAG) expressed concerns with provisions of the 2018 Farm Bill that legalized hemp, which they said have been “wrongly exploited by bad actors to sell recreational synthetic THC products across the country.”

They’re asking that lawmakers leverage the appropriations process, or the next iteration of the Farm Bill, to enact policy changes that “leave no doubt that these harmful products are illegal and that their sale and manufacture are criminal acts.”

Arkansas Attorney General Tim Griffin (R), Connecticut Attorney General William Tong (D), Indiana Attorney General Todd Rokita (R) and Minnesota Attorney General Keith Ellison (D) led the letter, underscoring the bipartisan sentiment driving the call for congressional action.

“Intoxicating hemp-derived THC products have inundated communities throughout our states due to a grievously mistaken interpretation of the 2018 Farm Bill’s definition of ‘hemp’ that companies are leveraging to pursue profits at the expense of public safety and health,” they wrote. “Many of these products—created by manufacturers by manipulating hemp to produce synthetic THC—are more intoxicating and psychoactive than marijuana, a Schedule I controlled substance, and are often marketed to minors.”

While revising federal hemp law has been a consistent talking point this year, with attempts in both chambers to ban products containing THC, such restrictions have so far been implemented only at the state level.

“Unless Congress acts, this gross distortion of the 2018 Farm Bill’s hemp provision will continue to fuel the rapid growth of an under-regulated industry that threatens public health and safety and undermines law enforcement nationwide,” the letter says.

Keep reading

Whoops—Ohio Accidentally Excludes Most Major Porn Platforms From Anti-Porn Law

Remember when people used to say “Epic FAIL”? I’m sorry, but there’s no other way to describe Ohio’s new age verification law, which took effect on September 30.

A variation on a mandate that’s been sweeping U.S. statehouses, this law requires online platforms offering “material harmful to juveniles”—by which authorities mean porn—to check photo IDs or use “transactional data” (such as mortgage, education, and employment records) to verify that all visitors are adults.

But lawmakers have written the law in such a way that it excludes most major porn publishing platforms.

“This is why you don’t rush [age verification] bills into an omnibus,” commented the Free Speech Coalition’s Mike Stabile on Bluesky.

Ohio Republican lawmakers introduced a standalone age verification bill back in February, but it languished in a House committee. A similar bill introduced in 2024 also failed to advance out of committee.

The version that wound up passing this year did so as part of the state’s omnibus budget legislation (House Bill 96). This massive measure—more than 3,000 pages—includes a provision that any organization that “disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” must verify that anyone attempting to view that material is at least 18 years old.

The bill also states that such organizations must “utilize a geofence system maintained and monitored by a licensed location-based technology provider to dynamically monitor the geolocation of persons.”
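
For what it’s worth, a “geofence” check is just a point-in-region test. The sketch below uses a made-up circular fence and coordinates; real deployments typically rely on IP geolocation or polygon boundaries rather than a simple radius.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Made-up fence: a 5 km circle around the Ohio Statehouse in Columbus.
FENCE_CENTER = (39.9612, -82.9988)
FENCE_RADIUS_KM = 5.0

def inside_fence(lat: float, lon: float) -> bool:
    """Point-in-circle test against the fence."""
    return haversine_km(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_KM

print(inside_fence(39.97, -83.00))  # True: about a kilometre from the centre
print(inside_fence(41.50, -81.69))  # False: Cleveland is well outside
```

What the statute demands, dynamically monitoring “the geolocation of persons,” is a check like this run continuously against every visitor, which is the surveillance objection in miniature.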

Existing Ohio law defines material harmful to juveniles as “any material or performance describing or representing nudity, sexual conduct, sexual excitement, or sado-masochistic abuse” that “appeals to the prurient interest of juveniles in sex,” is “patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable for juveniles,” and “lacks serious literary, artistic, political, and scientific value for juveniles.”

Under the new law, online distributors of “material harmful to juveniles” that don’t comply with the age check requirement could face civil actions initiated by Ohio’s attorney general.

Supporters of the law portrayed it as a way to stop young Ohioans from being able to access online porn entirely. But the biggest purveyors of online porn—including Pornhub and similar platforms, which allow users to upload as well as view content—seem to be exempt from the law.

Among the organizations exempted from age verification requirements are providers of “an interactive computer service,” which is defined by Ohio lawmakers as having the same meaning as it does under federal law.

The federal law that defines “interactive computer service”—Section 230 of the Communications Decency Act—says it “means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

That’s a bit of a mouthful, but we have decades of jurisprudence parsing that definition. And it basically means any platform where third parties can create accounts and can generate content, from social media sites to dating apps, message boards, classified ads, search engines, comment sections, and much more.

Platforms like Pornhub unambiguously fall within this category.

In fact, Pornhub is not blocking Ohio users as it has in most other states with age verification laws for online porn, because its parent company, Aylo, does not believe the law applies to it.

“As a provider of an ‘interactive computer service’ as defined under Section 230 of the Communications Decency Act, it is our understanding that we are not subject to the obligations under section 1349.10 of the Ohio Revised Code regarding mandated age verification for the ‘interactive computer services’ we provide, such as Pornhub,” Aylo told Mashable.

Keep reading