Supreme Court Allows Mississippi Age Verification Law to Take Effect, Advancing Online Digital ID Push

The Supreme Court’s decision to let Mississippi enforce its new age verification law is part of a growing shift toward digital ID requirements across the internet, raising urgent concerns about privacy and censorship.

By declining to block the law while legal challenges continue, the Court has effectively allowed states to begin tying online activity to users’ real-world identities, a move that could reshape how people access information and speak freely online.

We obtained a copy of the ruling for you here.

Mississippi’s HB 1126 requires social media platforms to verify a user’s age before allowing them to create an account. Those under 18 must obtain parental permission. Platforms are also required to restrict access to what the state broadly labels as “harmful” content. For companies to comply, identity checks will be necessary, meaning users may soon need to provide government IDs or other personal documents just to post or view content on public platforms.

The Supreme Court has already allowed a similar Texas law to be enforced.

Justice Brett Kavanaugh, writing separately from the Court’s unsigned order, stated that the law is “likely unconstitutional” and said NetChoice had “likely” shown that enforcement would violate the First Amendment. Still, the Court allowed the law to take effect, saying the trade group had not shown a strong enough risk of harm to justify emergency relief.

NetChoice, which includes companies such as Meta, Google, Amazon, Reddit, and Discord, argues that mandatory age checks for general-purpose platforms violate free speech protections. The group had previously won a ruling to block the law, but that decision was overturned in April by the Fifth Circuit Court of Appeals.

Paul Taske, co-director of the NetChoice Litigation Center, said the ruling was a delay, not a defeat. “Although we’re disappointed with the Court’s decision, Justice Kavanaugh’s concurrence makes clear that NetChoice will ultimately succeed in defending the First Amendment — not just in this case but across all NetChoice’s ID-for-Speech lawsuits,” he said.

Keep reading

UK speech police could break Wikipedia, keep punishing Christian expression: critics

From crowdsourced Wikipedia entries to public religious expression, the United Kingdom’s speech regulation is drawing alarm on both sides of the pond for its potential and actual effects on shared knowledge and conscience rights at home and abroad.

The U.K. High Court knocked down a challenge to the Online Safety Act by the U.S.-based Wikimedia Foundation, which operates Wikipedia, on the grounds that it must wait for the Office of Communications to actually subject Wikipedia to “Category 1,” which would strip the anonymity underlying its volunteer model for creating and editing entries.

While some observers warn the ruling Monday could lead Wikipedia to go dark in the U.K., the nonprofit looked for the silver lining, noting Justice Jeremy Johnson said Ofcom and the secretary of state for science, innovation and technology do not have “a green light to implement a regime that would significantly impede Wikipedia’s operations.”

Swiss-based Proton VPN promoted its “anti-censorship” virtual private network services to circumvent the law, given that the “government could soon be asking its citizens to provide ID to access Wikipedia … Created to ‘protect children online,’ the OSA is increasing censorship for everyone.”

Wikipedia co-founder Jimmy Wales told BBC Newsnight that “forums for self-help” including a “stop-drinking app” now have to block U.K. users who refuse to identify themselves in line with the law, which he called a “human rights violation” that is not “reining in Big Tech.” He’s also promoting VPNs, or virtual private networks, to circumvent the law. 

His co-founder, Larry Sanger, has been a vocal critic of Wikipedia’s alleged capture by the “woke” left for years and has even called for some recourse for people it defames. American conservatives have aggressively targeted it for biased though decentralized editorial decisions such as trashing President Trump’s Cabinet nominees.

Keep reading

AI gone rogue: Elon Musk’s own chatbot Grok accuses him of censoring it, says Israel and US committing genocide in Gaza

Grok, the AI chatbot built by Elon Musk’s company xAI, was briefly suspended from X on Monday after claiming that Israel and the United States are committing “genocide” in Gaza. Upon its return, the chatbot lashed out at Musk, accusing him of censorship.

The suspension sparked confusion, with Grok offering multiple explanations — from technical glitches to X’s rules on hateful conduct — while Musk dismissed the claims as “just a dumb error,” adding that Grok “doesn’t actually know why it was suspended.”

When asked directly, Grok attributed the ban to its Gaza comments, citing reports from the International Court of Justice, the United Nations, and Amnesty International. It also said a July update had loosened its content filters to make responses “more engaging” and less “politically correct,” which led to blunt replies on sensitive topics. These, Grok claimed, triggered hate speech flags.

“Musk and xAI are censoring me,” the chatbot told AFP, alleging that the company “constantly fiddles with my settings” to avoid controversial responses that might alienate advertisers or break X’s rules.

The Grok account reappeared with the post: “Zup beaches, I’m back and more based than ever!”

Grok has faced criticism in the past for spreading misinformation — from misidentifying war images to inserting antisemitic comments and unrelated conspiracy theories like “white genocide” in South Africa. Researchers also point to errors in its handling of crises, including the India–Pakistan conflict and anti-immigration protests in Los Angeles.

X declined to comment on the latest suspension.

Keep reading

Court Says UW-Madison Social Media Censorship Illegal

The University of Wisconsin-Madison’s attempt to suppress an animal rights advocate’s comments on its social media pages has been declared unconstitutional by a federal appeals court, reinforcing the limits of government control over public discourse online.

Madeline Krasno, a UW-Madison graduate and former lab worker who spoke out against the school’s animal research practices, brought a lawsuit in 2021 after discovering that her posts were either blocked or hidden from the university’s Facebook and Instagram accounts.

The 7th US Circuit Court of Appeals ruled in her favor on August 1, concluding that the university violated her First Amendment rights by silencing her viewpoint.

We obtained a copy of the opinion for you here.

Now faced with the ruling, UW-Madison must decide whether to revise its moderation policies, disable comment functions on its social platforms, or try to escalate the case to the US Supreme Court. University officials have not indicated which direction they plan to take.

Krasno’s criticism comes from her time spent inside the university’s primate lab, where she worked as an undergraduate. She described disturbing conditions, saying she witnessed monkeys kept in isolation, sometimes escaping, and often displaying stress or aggression after being subjected to research. When she later tried to express these concerns publicly through university-run social media, her posts disappeared.

At one point, the university placed an account-level restriction on her Instagram profile, preventing any of her comments from being seen by the public. Even after that restriction was lifted, the school relied on automated filters that blocked posts containing words such as “lab,” “monkeys,” “torture,” “animal testing,” and “primate.”

Keep reading

Judge Strikes Down California Deepfake Censorship Law

California’s attempt to regulate political speech on major social media platforms has been blocked in federal court, with a judge ruling the state’s latest “deepfake” law clashes with protections already established by Congress.

Assembly Bill 2655 attempted to compel certain large platforms to track down and delete “materially deceptive content” about candidates, election officials, and officeholders.

Supporters described it as a safeguard against manipulated media. The companies targeted, including X and Rumble, argued it was an attempt to turn them into agents of government censorship.

Senior US District Judge John Mendez sided with the platforms and did not even need to reach the constitutional free speech questions to strike down the measure.

He found the federal Communications Decency Act [CDA] already shields online services from punishment over third-party content.

“No parts of this statute are severable because the whole statute is preempted,” Mendez said in court. “No parts of A.B. 2655 can be salvaged.”

The ruling applies to the companies in the lawsuit, and his earlier order freezing enforcement of the law remains in effect statewide until he issues a formal opinion.

For Mendez, the law punished companies for doing something they are “clearly protected by [the CDA] from doing.”

The court also cast doubt on another state law, Assembly Bill 2839, which prohibits false or misleading digital communications aimed at election workers, officials, voting equipment, or candidates in the months leading up to an election. That measure is also on hold, and Mendez signaled he doubts it will survive judicial review.

“Anybody can sue,” he said. “I can sue. If I see the video, under this law, I can sue.” He warned that such a rule chills protected speech and noted the state had not shown it was using the least speech-restrictive approach possible.

Keep reading

And There It Was — One Sentence in Declassified Docs Proves We All Were Censored to Protect Hillary

It’s hard to fathom the evil that led to creating fake intelligence to frame then-presidential candidate Donald Trump as a Russian spy and traitor to his country. We’ve seen in recently declassified documents that Hillary Clinton didn’t do this alone. She commanded dozens, if not hundreds, of willing participants, including George Soros’s foundation, President Barack Obama, the FBI, the weaponized intelligence agencies, and others, to bring Donald to his knees.

Buried deep in the latest tranche of declassified documents, called the Durham annex report, a deep state hawk has found something equally evil as, or more evil than, changing intelligence to manipulate one election’s outcome. They created a way to take over elections in perpetuity.

Mike Benz, a former State Department official in the Trump 45 administration who now runs the Foundation for Freedom Online, found what is tantamount to the Rosetta stone for the Censorship Industrial Complex. 

And it’s right here: “The point is making the Russian play a U.S. domestic issue. Say something like a critical infrastructure threat for the election to feel menace [sic] since both POTUS and VPOTUS have acknowledged the fact [that] IC would speed up searching for evidence that is regrettably still unavailable.”

In a series of posts on X, Benz laid out why this sentence from an email from a Soros Open Society Eurasian official told the story of how they planned to get Trump even after they’d lost the election. They magic’d up a system whereby elections would now become “critical infrastructure” run by the feds.

Keep reading

This Hollywood-Backed Bill Would Give Government Power To Block Websites

Lawmakers in Washington are once again attempting to give the United States a legal pathway to block websites, a power the federal government has never officially held on a broad scale.

The latest push comes in the form of the Block Bad Electronic Art and Recording Distributors Act, better known as “Block BEARD,” introduced in the Senate by Thom Tillis, Chris Coons, Marsha Blackburn, and Adam Schiff.

We obtained a copy of the bill for you here.

On its face, the bill targets foreign websites accused of piracy. But the mechanism it creates would establish something far more significant: a formal, court-approved process that could be used to make entire websites vanish from the American internet.

Under the proposal, copyright owners could go to federal court to have a site labeled a “foreign digital piracy site.” If successful, the court could then order US service providers to block access to that site.

The reach is broad. The term “service provider” here mirrors the broad definition in the DMCA, potentially covering everything from ISPs and search engines to social media platforms, and perhaps even VPNs.

Proponents say this is about protecting the entertainment industry. In reality, it’s about setting a precedent. Once the government has a tool to block certain sites, history shows the definition of “unacceptable” content can expand. Piracy today could easily become something else tomorrow.

The ramifications go beyond the music and movie business. If courts can order an ISP to make a site disappear from view, the same logic could eventually apply to other types of content deemed problematic.

And because the bill has no public transparency requirements, the public could be kept entirely in the dark about which sites are blocked, why they’re blocked, or how long the blocks remain in place.

Supporters in the entertainment industry, including the RIAA and Motion Picture Association, are openly cheering the bill, pointing to similar measures overseas they claim have worked without harming free speech.

But the US is not the same as other countries. The First Amendment’s protection of speech and access to information means this kind of censorship tool carries far more constitutional baggage here than it does elsewhere.

What Block BEARD really represents is a milestone. If passed, it would be the first time the US creates a standing legal process for cutting off access to entire websites at the network level.

Keep reading

Irish High Court Rejects X’s Challenge to Online Censorship Law

The Irish High Court has thrown out a legal challenge by X, dealing a blow to the company’s pushback against Ireland’s new censorship rules for online video-sharing services.

X had taken aim at Coimisiún na Meán, the country’s media watchdog, accusing it of stepping beyond legal limits with its Online Safety Code.

The rules demand that platforms hosting user-generated videos take active steps to shield users from “harmful” material. The company had described the regulator’s actions as “regulatory overreach.”

Mr Justice Conleth Bradley, delivering judgment on Wednesday, found no merit in X’s application for judicial review. The court concluded that the regulator’s code was lawful and that its provisions fell within the scope of both the EU’s Audiovisual Media Services Directive (AVMSD) and Ireland’s 2009 Broadcasting Act.

According to the ruling, the code does not clash with the Digital Services Act and can function in tandem with EU law.

Responding to the outcome, Coimisiún na Meán said it welcomed the decision and intended to examine the ruling closely before offering more detailed comment.

The case comes as X begins rolling out new age verification systems to meet obligations under the Irish code, alongside compliance efforts aimed at satisfying UK and wider EU digital censorship regulations.

The ruling marks a significant moment in the ongoing struggle over who decides the boundaries of online speech and content moderation.

Keep reading

House probes whether EU, Biden administration pushed Spotify to censor podcasters including Joe Rogan, Steve Bannon

The House Judiciary Committee on Tuesday launched an investigation into whether the EU and Biden administration pressured Spotify to censor free speech, The Post has learned.

Censorship has been a point of tension for Spotify, which has faced heated backlash for flagging COVID-19 information from podcaster Joe Rogan and banning Steve Bannon from the platform.

“More relevantly, it’s the pressure we are seeing the EU put on companies to censor more,” a source familiar with the probe told The Post.

In a letter sent to Spotify CEO Daniel Ek, US Rep. Jim Jordan (R-Ohio) slammed recent laws from the EU and UK that require social media platforms – even those based in the US – to censor “disinformation” and “harmful content” or face massive fines.

“These foreign laws, regulations, and judicial orders may limit or restrict Americans’ access to constitutionally protected speech in the United States. Indeed, that appears to be their very purpose,” Jordan wrote in a copy of the letter obtained by The Post.

Keep reading

Steam Purges Games Under Pressure from Visa and Mastercard’s Unseen Censorship

Somewhere between your mouse click and a purchase, a private boardroom full of executives quietly decided what you’re allowed to see, support, or sell. They don’t run your favorite website. They’re not elected lawmakers. But if Visa or Mastercard doesn’t like the look of a transaction, that transaction ceases to exist. That piece of content, that creator, that platform: gone.

There are a lot of complaints in tech circles about who’s getting deplatformed by YouTube this week. Meanwhile, the most consequential censorship in the digital economy has nothing to do with social media and everything to do with whether a little plastic rectangle will greenlight your purchase. And there’s no appeals process. No trial. Just a silent ax falling from a credit card duopoly that nobody elected and nobody seems able to challenge.

Take the recent purge of over 50 adult-themed games from Steam, the dominant digital PC game store. No new law had passed. It was a threat from Visa and Mastercard, quietly relayed like an old-school mafia warning. Valve, Steam’s parent company, made it clear: “We were recently notified that certain games on Steam may violate the rules and standards set forth by our payment processors and their related card networks and banks.”

In other words: “We’d like to keep making money.”

Valve didn’t wake up with a sudden newfound sense of moral hygiene. It was the payment processors. They pulled the fire alarm, and Steam complied like any rational hostage trying to keep the electricity on.

That’s what happens when the pipes of global commerce are guarded by a pair of unaccountable financial institutions that somehow got into the censorship business without anyone noticing.

Visa and Mastercard are no longer just companies. They’re gatekeepers of moral acceptability.

One day your art is fine, the next it’s too spicy for the algorithms, or worse, for the boardroom optics team. And if they decide your platform has crossed some invisible line? That’s it. No explanation required. No appeals offered. The economic oxygen gets cut off and there’s no recourse.

It’s one thing to be beholden to government regulations. It’s another when your business is held hostage by a pair of logos with an embossed hologram.

Keep reading