Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

European Parliament agrees on resolution calling for minimum age on social media

The European Parliament on Wednesday agreed on a resolution which calls for a default minimum age of 16 on social media to ensure “age-appropriate online engagement”.

According to a draft published in October, the text called for “the establishment of a harmonised European digital age limit of 16 years old as the default threshold under which access to online social media platforms should not be allowed unless parents or guardians have authorised their children otherwise”.

It also called for a harmonised European digital age limit of 13, under which no minor could access social media platforms, and an age limit of 13 for video-sharing services and “AI companions”.

The Parliament resolution is not legally binding and does not set policy.

We Must Resist The Rise Of A Global Censorship Regime

The ordeal of Finnish Parliamentarian Päivi Räsänen, who just stood trial a third time, after being acquitted twice, for a 2019 tweet in which she simply shared a Scripture verse and her faith-based views on marriage and sexuality, is a warning to all who value the right to speak freely across the world.

When governments claim the power to police opinions, even peaceful expressions of faith can be dragged through the courts.

And now this promises to be a much more pervasive reality in Europe as a result of the 2022 Digital Services Act (DSA). Ahead of the European Union’s review of the DSA, 113 international experts committed to free speech wrote to the European Commission highlighting the law’s incompatibility with free expression, citing the possibility of worldwide takedown orders. Räsänen was a signatory to the letter, alongside a former vice president of Yahoo Europe, a former U.S. senator, and politicians, academics, lawyers, and journalists from around the globe.

The DSA gives the E.U. authority to enforce moderation of “illegal content” on platforms and search engines with over 45 million monthly users. It enables bureaucrats to control online speech at scale under the guise of “safety” and “protecting democracy.”

However, E.U. member states may have different definitions of illegal content. Thus, under the law, anything deemed illegal under the speech laws of any one E.U. member state could potentially be removed across all of Europe. That means the harshest censorship laws in Europe could soon govern the entire continent, and possibly the internet worldwide. And if platforms fail to comply, they face billions in fines, thus providing clear incentive to censor and none to promote free speech.

Late last month, the E.U. announced that Meta and TikTok will face fines of up to 6 percent of their global sales over accusations of violating the DSA’s transparency requirements. But the well-founded fear is that this law, which grants sweeping authority to European regulators to control online speech across platforms such as X, YouTube, and Facebook, will enable the kind of censorship endured by Räsänen on a global scale.

Further, citizens in countries outside of the E.U., like the United States, are at risk of facing new levels of censorship, because the DSA applies to large online digital platforms and search engines accessed within the E.U. but that have a global presence. It explicitly states its extraterritorial applicability as it covers platforms used by people “that have their place of establishment or are located in the Union, irrespective of where the providers of those intermediary services [the platforms] have their place of establishment.”

Platforms are incentivized to adapt their international content moderation policies to E.U. censorship. If those platforms deem something “illegal” under E.U. rules, that content may be banned everywhere, even in countries with strong free speech protections.

EU Parliament Votes for Mandatory Digital ID and Age Verification, Threatening Online Privacy

The European Parliament has voted to push the European Union closer to a mandatory digital identification system for online activity, approving a non-binding resolution that endorses EU-wide age verification rules for social media, video platforms, and AI chatbots.

Though presented as a child protection measure, the text strongly promotes the infrastructure for universal digital ID, including the planned EU Digital Identity Wallet and an age verification app being developed by the European Commission.

Under the proposal, every user would have to re-identify themselves at least once every three months to continue using major platforms. Children under 13 would be banned entirely, and teenagers between 13 and 16 would require parental approval to participate online.

Chat Control 2.0: EU Moves Toward Ending Private Communication

Between the coffee breaks and the diplomatic niceties of Brussels bureaucracy, a quiet dystopian revolution might be taking place. On November 26, a roomful of unelected officials could nod through one of the most consequential surveillance laws in modern European history, without ever having to face the public.

The plan, known colloquially as Chat Control 2.0, sits on the agenda of the Committee of Permanent Representatives, or Coreper, a club of national ambassadors whose job is to prepare legislation for the European Council. This Wednesday, they may “prepare” it straight into existence.

According to MEP Martin Sonneborn, Coreper’s diplomats could be ready to endorse the European Commission’s digital surveillance project in secret.

It was already due for approval a week earlier before mysteriously vanishing from the schedule. Now it’s back, with privacy advocates watching like hawks who suspect the farmer’s got a shotgun.

The Commission calls Chat Control 2.0 a child-protection measure. The branding suggests moral urgency; the text suggests mass surveillance. The proposal would let governments compel messaging services such as WhatsApp or Signal to scan users’ messages before they’re sent.

Officials insist that the newest version removes mandatory scanning, which is a bit like saying a loaded gun is safer because you haven’t pulled the trigger yet.

The GRANITE ACT: Wyoming Bill Targets Foreign Censors With $10M Penalties

The first cannon shot in a new kind of free speech war came not from Washington or Silicon Valley, but from Cheyenne. Wyoming Representative Daniel Singh last week filed the Wyoming GRANITE Act.

The “Guaranteeing Rights Against Novel International Tyranny & Extortion Act,” if passed, would make Wyoming the first state to let American citizens sue foreign governments that try to police what they say online.

The bill traces back to a blog post by attorney Preston Byrne, the same lawyer representing 4chan and Kiwi Farms in their battles against censorship-driven British regulators.

Byrne’s idea was simple: if the UK’s Ofcom or Brazil’s Alexandre de Moraes wanted to fine or threaten Americans over online speech, the US should hit back hard.

Exactly one month after that idea appeared on his blog, it’s now inked into Wyoming legislative paperwork.

Byrne said:

“This bill has a long way to go until it becomes a law, it’s got to make it through legislative services, then to Committee, and then get introduced on the floor for a vote, but the important thing is, the journey of this concept, the idea of a foreign censorship shield law which also creates a civil cause of action against foreign censors, into law has begun.”

That “journey” may be the kind of slow procedural trudge that usually kills most ideas in committee, but the intent here is anything but mild. With the growing threat of censorship demands from the UK, Brazil, Europe, and Australia, there is a lot of momentum to fight back.

“For the first time, state legislators are moving to implement rules that will allow U.S. citizens to strike back, hard, against foreign countries that want to interfere with Americans’ civil rights online,” Byrne continued.

The Act would let American citizens and companies sue foreign governments or their agents for trying to censor them, and, crucially, it strips away the usual escape hatch of sovereign immunity.

In its legal filing responding to the 4chan and Kiwi Farms lawsuit, Ofcom insisted it has “sovereign immunity” and told the court there were “substantial grounds” for throwing out the case on that basis.

The regulator’s lawyers framed Ofcom as a protected arm of the British state, immune from civil claims even when its decisions target a platform based entirely inside the United States.

Ofcom treats the idea of “sovereign immunity” as something substantial, yet treats the First Amendment as something that does not exist at all.

Top MAGA Influencers Accidentally Unmasked as Foreign Trolls

Elon Musk’s social media site X has rolled out a new feature in an effort to increase transparency—and unwittingly revealed that many of the site’s top MAGA influencers are actually foreign actors.

The new “About This Account” feature, which became available to X users on Friday, lets anyone see where an account is based, when it joined the platform, how often it has changed its username, and how its owner downloaded the X app.

Upon rollout, rival factions on the combative social platform began inspecting where their online adversaries were really based, and dozens of major MAGA and right-wing influencer accounts were revealed to be based overseas.

“This is easily one of the greatest days on this platform,” wrote Democratic influencer Harry Sisson.

“Seeing all of these MAGA accounts get exposed as foreign actors trying to destroy the United States is a complete vindication of Democrats, like myself and many on here, who have been warning about this.”

Dozens of major accounts masquerading as “America First” or “MAGA” proponents have been identified as originating in places such as Russia, India, and Nigeria.

In one example, the account MAGANationX—with nearly 400,000 followers and a bio reading “Patriot Voice for We The People”—is actually based in Eastern Europe.

UK Government “Resist” Program Monitors Citizens’ Online Posts

Let’s begin with a simple question. What do you get when you cross a bloated PR department with a clipboard-wielding surveillance unit?

The answer, apparently, is the British Government Communications Service (GCS). Once a benign squad of slogan-crafting, policy-promoting clipboard enthusiasts, they’ve now evolved (or perhaps mutated) into what can only be described as a cross between MI5 and a neighborhood Reddit moderator with delusions of grandeur.

Yes, your friendly local bureaucrat is now scrolling through Facebook groups, lurking in comment sections, and watching your aunt’s status update about the “new hotel down the road filling up with strangers” like it’s a scene from Homeland. All in the name of “societal cohesion,” of course.

Once upon a time, the GCS churned out posters with perky slogans like Stay Alert or Get Boosted Now, like a government-powered BuzzFeed.

But now, under the updated “Resist” framework (yes, it’s actually called that), the GCS has been reprogrammed to patrol the internet for what they’re calling “high-risk narratives.”

Not terrorism. Not hacking. No, according to The Telegraph, the new public enemy is your neighbor questioning things like whether the council’s sudden housing development has anything to do with the 200 migrants housed in the local hotel.

It’s all in the manual: if your neighbor posts that “certain communities are getting priority housing while local families wait years,” this, apparently, is a red flag. An ideological IED. The sort of thing that could “deepen community divisions” and “create new tensions.”

This isn’t surveillance, we’re told. It’s “risk assessment.” Just a casual read-through of what that lady from your yoga class posted about a planning application. The framework warns of “local parental associations” and “concerned citizens” forming forums.

And why the sudden urgency? The new guidance came hot on the heels of a real incident: protests outside hotels housing asylum seekers, which followed the sexual assault of a 14-year-old girl by Hadush Kebatu, an Ethiopian migrant.

Now, instead of looking at how that tragedy happened or what policies allowed it, the government’s solution is to scan the reaction to it.

What we are witnessing is the rhetorical equivalent of chucking all dissent into a bin labelled “disinformation” and slamming the lid shut.

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

The Algorithm Accountability Act’s Threat to Free Speech

A new push in Congress is taking shape under the banner of “algorithmic accountability,” but its real effect would be to expand the government’s reach into online speech.

Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have introduced the Algorithm Accountability Act, a bill that would rewrite Section 230 of the Communications Decency Act to remove liability protections from large, for-profit social media platforms whose recommendation systems are said to cause “harm.”

We obtained a copy of the bill for you here.

The proposal applies to any platform with more than a million users that relies on algorithms to sort or recommend content.

These companies would be required to meet a “duty of care” to prevent foreseeable bodily injury or death.

If a user or family member claims an algorithm contributed to such harm, the platform could be sued, losing the legal shield that has protected online speech for nearly three decades.

Although the bill’s authors describe it as a safety measure, the structure of the law would inevitably pressure platforms to suppress or downrank lawful content that might later be portrayed as dangerous.

Most major social networks already rely heavily on automated recommendation systems to organize and personalize information. Exposing them to lawsuits for what those systems display invites broad, quiet censorship under the guise of caution.
