Crushing the Right to Conscientiously Object

As the U.S. and Israel’s deeply unpopular war with Iran enters its second week, social media platform X is censoring the accounts of people providing information to military servicemembers on how they can refuse to serve. This is particularly relevant as fears have grown that U.S. ground troops may enter the conflict.

The Center on Conscience & War, an 80-year-old nonprofit that, according to its website, “advocates for the rights of conscience, opposes military conscription, and serves all conscientious objectors to war,” was banned on X for 12 hours. The center’s executive director, Mike Prysner, shared a notice that the center received from X which labeled their posts as having “violated X rules” against “illegal and regulated behaviors.”

Prysner wrote: “This is the post @CCW4COs was suspended for, informing service members of their legal right under DoDI 1332.14 to report ‘failure to adapt’ within first 365 days of service and receive an entry-level discharge.”

It remains legal to conscientiously object to military service. The only conceivable way that the post could be framed as encouraging illegal or irregular behavior would be to recast such objections as mutiny, which is exactly what pro-Israeli voices on social media have been frantically doing in the last few days.

After conservative commentator Candace Owens also encouraged those in the U.S. military to conscientiously object to serving in Iran, pro-Israel journalist Emily Schrader wrote on X:

“This is illegal. She is literally advocating mutiny. Under 18 U.S.C. § 2387 (Advocating overthrow or disloyalty in the armed forces). It is a crime for any person, including civilians, to willfully advocate or attempt to cause:
• insubordination in the armed forces
• disloyalty among service members
• mutiny or refusal of duty
It also criminalizes distributing materials intended to encourage those outcomes.
The penalty can be up to 10 years in prison and fines.”

Other pro-Israel voices, like billionaire hedge-fund manager Bill Ackman, reposted Schrader’s sentiments.

The social media ban on the Center on Conscience & War came less than 24 hours after Prysner wrote, also via social media, about anecdotal evidence of troops being readied for combat:

“I just spoke with the mother of a service member in this unit. They were given one last call home before having to turn in their phones. He told his mom they were going ‘boots on the ground’ tonight.”

Keep reading

How Grok’s Football Roasts Put X in the Crosshairs of Britain’s Online Censorship Law

Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material.

UK authorities reacted quickly, invoking the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms can face fines reaching up to ten percent of global revenue if they fail to address harmful content.

The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen.

It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, who died in a car accident in Spain in June 2025 at the age of 28 while playing for Liverpool F.C.

Keep reading

Turkey Blocks 41 Social Media Accounts Over Iran War Posts

Turkey’s government blocked 41 social media accounts on X, Facebook, and Instagram last Friday, deleted content from 75 more, and launched criminal proceedings against account holders, all on the grounds that they spread what officials called “disinformation and provocative content.”

The crackdown followed the start of attacks on Iran. Presidential Communications Director Burhanettin Duran framed the deletions as a national security response, saying the targeted accounts had been “systematically sharing unverified content aimed at creating fear, panic and uncertainty in society.”

Who decided the content was disinformation? The government. Who gets to define “provocative content”? The government. Who determines what threatens “public order, social peace, and our national security”? Also the government: the same government that ordered the blocks.

The operation involved the Turkish Presidency’s Communications Directorate, the cybercrime department of the Security Directorate General, the Information and Communication Technologies Authority, and the chief public prosecutors’ offices. A coordinated state apparatus, mobilized to silence social media accounts during a regional conflict.

Keep reading

Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.

Keep reading

Liberal Carney government moves to end debate on bill that could criminalize quoting Bible

Conservative MP Andrew Lawton warned that the Liberal government intends to force an end to debate on Bill C-9, the censorship bill that has attracted a massive backlash from religious Canadians because it would remove protections for sincerely held religious beliefs, particularly regarding LGBT issues.

“The Liberals have put a motion on notice in the House of Commons to cut off debate on Bill C-9 and force all amendments to a vote with no discussion,” Lawton wrote on X on March 5. “They are censoring debate on their censorship bill.”

The motion, tabled as “Government Business,” instructs the Standing Committee on Justice and Human Rights to “immediately resume clause-by-clause consideration of the bill whereupon all remaining amendments submitted to the committee shall be deemed moved” and that “the Chair shall put the question, forthwith and successively, without further debate on all remaining clauses and proposed amendments and subamendments.”

This means that votes will be called on all amendments, and the meeting cannot end until Bill C-9 has passed review. A report will then be sent to Parliament “no later than two sitting days after the completion of clause-by-clause consideration,” and the bill would then go through both the report stage and third reading in a single sitting day each.

“WOW,” Conservative MP Garnett Genuis responded to the news on X. “Carney is now trying to ram through C-9 ‘without further debate on all remaining clauses’ at committee. This is deeply disturbing. Call your MP now and tell them to oppose this attack on freedom of speech and freedom of religion.”

Keep reading

Florida Has Deemed All Existing Intro to Sociology Textbooks Illegal and Produced Its Own

Imagine the following scenario: You’re teaching Introduction to Sociology at a community college in Florida, and today, you’re trying to explain the well-documented pay gap between men and women in the United States. You check the guidance you just received from your dean, who received instructions via email from the executive vice chancellor of the Florida College System. The instructions state explicitly that explaining “unequal outcomes between men and women” in terms of “institutional sexism” would violate state law.

So how are you supposed to explain this disparity? The email includes guidance on just this question:

biological sex chromosomes determine … how females and males behave … So, in teaching this, one might point out that women and men with the same credentials enter different jobs such that certain jobs are occupied primarily by women (i.e., female-dominant) some are occupied primarily by men (i.e., male-dominant).

Did you misread the guidance? Your eyes scroll up on the page, which is a state-created curriculum for use in all non-elective Intro to Sociology classes taught in Florida’s community colleges. You are explicitly prohibited from discussing “systemic racism, institutional racism, [or] historical discrimination.” You cannot “state an intent of institutions today to oppress persons of color.” You cannot “describe when, how, or why individuals determine their sexual orientation and/or gender identity.”

Surely this is a mistake?

Keep reading

Britain is Trying to Censor Americans – But America is Fighting Back

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform.

This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025.

The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible.

Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

Keep reading

Britain and Europe are struggling economically; their response? Regulate the world

It used to be said that the sun never set on the British Empire, so far-flung were its possessions. Britain has long since retreated from most of those territories, most recently, and controversially, in its attempt to relinquish control of the Chagos Islands. Yet even as it sheds physical dominion, Britain appears increasingly eager to export something else: its laws and regulations.

In that project, it is joined enthusiastically by its former partners in the European Union. If the Old World has one major export left, it is bureaucracy.

The most obvious current target is X, Elon Musk’s platform, and its Grok AI tool. Some users of questionable taste quickly discovered that Grok could be used to generate deepfake images of celebrities in revealing attire. More seriously, it was alleged that the technology had been used to generate sexualised images of children. In response, last month the UK’s communications regulator, Ofcom, opened a formal investigation under the Online Safety Act, citing potential failures to prevent illegal content. The possible penalties are severe, ranging from multi-million-pound fines, based on the company’s global revenue, to a complete ban on the platform in the UK.

Senior British officials were quick to escalate the rhetoric. Prime Minister Keir Starmer and Technology Secretary Liz Kendall publicly condemned X and emphasised that all options, including nationwide blocking, were on the table. The message was unmistakable: compliance would be enforced, one way or another.

Two days later, X announced new restrictions to prevent Grok from editing images of real people into revealing scenarios and to introduce geo-blocking in jurisdictions where such content is illegal. Ofcom described these changes as “welcome” but insufficient, insisting its investigation would continue. Meanwhile, pressure spread outward. Other governments announced restrictions, and the European Commission expanded its own probes under the Digital Services Act. What began as a British enforcement action quickly morphed into coordinated global pressure, effectively pushing X toward worldwide policy changes.

This is the crucial point. British regulators were not merely seeking compliance for British users. They were pressing for changes to X’s global policies and technical architecture to govern speech and expression far beyond the UK’s borders. What might initially have been framed as a failure to impose sensible safeguards on a powerful new tool has become a test case for whether regulators in one jurisdiction can dictate technological limits everywhere else.

This pattern is not new. Ofcom has already attempted to extend its reach directly into the United States, brushing aside the constitutional protections afforded to Americans. Since the Online Safety Act came into force in 2025, Ofcom has adopted an aggressively expansive interpretation of its authority, asserting that any online service “with links to the UK,” meaning merely accessible to UK users and deemed to pose “risks” to them, must comply with detailed duties to assess, mitigate, and report on illegal harms. Services provided entirely from abroad are explicitly deemed “in scope” if they meet these criteria.

The flashpoints have been 4chan and Kiwi Farms, two US-based forums notorious for unmoderated speech and even harassment campaigns. In mid-2025, Ofcom initiated investigations into both for failing to respond to statutory information requests and for failing to complete the required risk assessments. It ultimately issued a confirmation decision against 4chan, imposing a £20,000 fine plus daily penalties for continued non-compliance, despite the site having no physical presence, staff, or infrastructure in the UK.

Rather than comply, the operators of both sites filed suit in US federal court, arguing that Ofcom’s actions violate the First Amendment and that the regulator lacks jurisdiction to enforce British law against American companies. The litigation frames the dispute starkly: whether a foreign regulator may, through regulatory pressure, compel changes to lawful American speech.

That question has now spilt into US politics. Senior American officials have criticised Ofcom’s posture as an extraterritorial threat to free speech, and at least one member of Congress has threatened retaliatory legislation. What Britain views as online safety increasingly appears, from across the Atlantic, to be regulatory imperialism.

Keep reading

Left-wing ideology is being encoded into AI systems to censor “wrongthink”

In 2021, a group of researchers dramatically departed OpenAI, the company behind ChatGPT. Led by Dario Amodei, OpenAI’s former vice president of research, they cited deep concerns about “AI safety.” The company was moving too fast, they warned, prioritising commercial interests over humanity’s future. The risks were said to be existential. These Effective Altruists were going to do things the right way.

Their solution? Start a new company called Anthropic, premised on building AI “the right way” with “safety” (that word will become a recurring theme), and “proper guardrails.” They initially raised hundreds of millions (today, that number is in the tens of billions) from investors who bought the pitch: we’re the good guys preventing runaway artificial general intelligence (“AGI”).

Noble, right? Except these supposed guardrails against AGI have become pretty much impossible to quantify. What we do have is an incredibly sophisticated content moderation system that filters inquiries and commands through a Silicon Valley thought bubble. It doesn’t seem like they’re trying to prevent AGI from destroying humanity, but instead, to prevent you from challenging the core tenets of their political philosophy.

Go ahead and try to generate content questioning climate ideology, the trans agenda, voter ID laws or election integrity, and watch the “safety” guardrails kick in.

This isn’t about preventing Skynet. It’s about making sure AI parrots the right opinions and associates with the right kind of people.

Now that Anthropic is its own technology giant of an AI company, it is facing the same critiques from true believers in the space. Amodei has put his principles on hold to allow for foreign investment from Gulf states with a poor human rights track record. Meanwhile, the company remains guided by a secular progressive “philosopher” whose values are entirely detached from America’s founding ideas.

Keep reading

HEROIC: CBS Austin Reporter Caught on Hot Mic Allegedly DEFYING Direct Orders from Higher-Ups to Censor Pro-Trump, Pro-Israel Crowds Following Iran Strikes

A local news reporter in Austin, Texas, was caught on a hot mic allegedly refusing to follow orders from his corporate overlords to bury the truth.

As the world reacts to the decisive joint U.S.-Israel strikes on Iran, the fake news media apparatus is working overtime to suppress the massive wave of public support for President Trump and Prime Minister Netanyahu.

But at CBS Austin, the narrative hit a major roadblock in the form of reporter Vinny Martorano.

During the live feed staging, Martorano was seen being handed a phone by a crew member, who appeared to relay a directive from higher-ups regarding how to frame the unfolding demonstration at the Texas Capitol last weekend.

CREW MEMBER: [Hands Martorano a phone, likely showing a directive from the newsroom]

VINNY MARTORANO: What does that mean?

CREW MEMBER: It means they don’t want us to focus on this.

MARTORANO: All right. Well, I am.

Despite the clear orders from his “higher-ups” to deemphasize the pro-American, pro-Israel sentiment, Martorano refused to be a puppet for the radical left’s agenda.

Martorano then delivered a balanced report that acknowledged the reality on the ground.

Martorano told the viewers (presumably in the live shot):

All right. There are a lot of mixed opinions across Austin about the joint attack between the United States and Israel against Iran that happened earlier this morning. Some people like this group behind me are thanking Trump and the United States government for following through with this attack against Iran.

Keep reading