Are We in a Free Speech Recession?

For years, debates over hate speech laws have been framed as moral disputes about civility and protection. Increasingly, however, they are becoming legal and political battles over the limits of “free” expression in democratic societies. 

A report by the Future of Free Speech project, titled The Free Speech Recession Hits Home, argues that established democracies are experiencing measurable declines in protections for speech once considered firmly safeguarded. The report contends that restrictions once associated primarily with authoritarian regimes are now expanding across Western countries under the banner of combating hate, misinformation, and extremism. 

Hate speech laws are being broadly interpreted all over the Western world, and their continued expansion is reshaping the boundaries of lawful expression. 

Keep reading

Britain and Europe are struggling economically; their response? Regulate the world

It used to be said that the sun never set on the British Empire, so far-flung were its possessions. Britain has long since retreated from most of those territories, most recently, and controversially, in its attempt to relinquish control of the Chagos Islands. Yet even as it sheds physical dominion, Britain appears increasingly eager to export something else: its laws and regulations. 

In that project, it is joined enthusiastically by its former partners in the European Union. If the Old World has one major export left, it is bureaucracy.

The most obvious current target is X, Elon Musk’s platform, and its Grok AI tool. Some users of questionable taste quickly discovered that Grok could be used to generate deepfake images of celebrities in revealing attire. More seriously, it was alleged that the technology had been used to generate sexualised images of children. In response, last month the UK’s communications regulator, Ofcom, opened a formal investigation under the Online Safety Act, citing potential failures to prevent illegal content. The possible penalties are severe, ranging from multi-million-pound fines, based on the company’s global revenue, to a complete ban on the platform in the UK.

Senior British officials were quick to escalate the rhetoric. Prime Minister Keir Starmer and Technology Secretary Liz Kendall publicly condemned X and emphasised that all options, including nationwide blocking, were on the table. The message was unmistakable: compliance would be enforced, one way or another.

Two days later, X announced new restrictions to prevent Grok from editing images of real people into revealing scenarios and to introduce geo-blocking in jurisdictions where such content is illegal. Ofcom described these changes as “welcome” but insufficient, insisting its investigation would continue. Meanwhile, pressure spread outward. Other governments announced restrictions, and the European Commission expanded its own probes under the Digital Services Act. What began as a British enforcement action quickly morphed into coordinated global pressure, effectively pushing X toward worldwide policy changes.

This is the crucial point. British regulators were not merely seeking compliance for British users. They were pressing for changes to X’s global policies and technical architecture to govern speech and expression far beyond the UK’s borders. What might initially have been framed as a failure to impose sensible safeguards on a powerful new tool has become a test case for whether regulators in one jurisdiction can dictate technological limits everywhere else.

This pattern is not new. Ofcom has already attempted to extend its reach directly into the United States, brushing aside the constitutional protections afforded to Americans. Since the Online Safety Act came into force in 2025, Ofcom has adopted an aggressively expansive interpretation of its authority, asserting that any online service “with links to the UK,” meaning merely accessible to UK users and deemed to pose “risks” to them, must comply with detailed duties to assess, mitigate, and report on illegal harms. Services provided entirely from abroad are explicitly deemed “in scope” if they meet these criteria.

The flashpoints have been 4chan and Kiwi Farms, two US-based forums notorious for unmoderated speech and even harassment campaigns. In mid-2025, Ofcom initiated investigations into both for failing to respond to statutory information requests and for failing to complete the required risk assessments. It ultimately issued a confirmation decision against 4chan, imposing a £20,000 fine plus daily penalties for continued non-compliance, despite the site having no physical presence, staff, or infrastructure in the UK.

Rather than comply, the operators of both sites filed suit in US federal court, arguing that Ofcom’s actions violate the First Amendment and that the regulator lacks jurisdiction to enforce British law against American companies. The litigation frames the dispute starkly: whether a foreign regulator may, through regulatory pressure, compel changes to lawful American speech.

That question has now spilt into US politics. Senior American officials have criticised Ofcom’s posture as an extraterritorial threat to free speech, and at least one member of Congress has threatened retaliatory legislation. What Britain views as online safety increasingly appears, from across the Atlantic, to be regulatory imperialism.

Keep reading

Jiminy Cricket, German Hurt #Feelingz Strike Again

This is your weekly reminder that just because the Europeans do wounded vanity, umbrage, and virtue-signaling lectures so much better than we do doesn’t necessarily mean that what is issuing forth from their sanctimonious streusel holes matches what their barely restrained authoritarian instincts are doing in actuality.

One of the best ways to highlight the dual nature of their conflicted existence is to pick some of the subjects that constitute their favorite defensive posturing. A couple that are really getting worn out in tussles against the Americans are ‘freedom’ and ‘democracy.’ These are often used interchangeably, or, even better, combined for what Europeans perceive as a coup de grâce argument-ender.

This is all well and good posturing in the Never Never Land of diplomatic sword crossing for public effect. But in the trenches, streets, and homes of regular citizens, mayhap things look a little different than the rosy, beatific picture of pastoral freedoms and hives of honey-laden democracy painted by the Brussels Brahmins from their castles.

A prime example has surfaced.

I’ve often written of the skeletal, scarecrowish, bespectacled creature who clings with bony fingers to visions of wielding outsized international influence even as he reigns over the meltdown of his own once prosperous and mighty economy as Chancellor of Germany. From the very beginning, from the day after his election, in fact, I dubbed the risible Friedrich Merz ‘Master of the Old Magoo’ in reference to his chameleon-like ability to change his colors – and dump his promises – at a moment’s notice.

This conscience-free, unscrupulous, ethically unconcerned style of governance has not gone unnoticed, nor is it much appreciated, by Germans themselves, and imagine that. It’s kind of like being a dewy-eyed Spanberger or Mamdani voter right about now, only Germany is more of a police state trying to deal with it.

Did I say that?

*checks notes*

Why, yes. Yes, I did.

Keep reading

Macron Calls Online Free Speech Argument “Pure Bullshit”

European governments framing social media bans for minors as child protection are quiet about what those bans actually require: identity checks for everyone. Every adult who wants to use Instagram, TikTok, or YouTube in France, Spain, or Germany would need to verify their real-world identity to access the platform. Anonymity, one of the oldest protections for dissenting speech, goes with it.

That’s the context Emmanuel Macron left out when he called free speech online “pure bullshit” in New Delhi on Wednesday.

The French president was addressing companies and their American backers as European governments push social media restrictions, as well as curbs on “hate speech,” a move the Trump administration has criticized as censorship.

Macron’s counterargument is based on algorithmic opacity. “Having no clue about how their algorithm is made, how it’s tested, trained, and where it will guide you, the democratic consequences of this bias could be huge,” he said.

Keep reading

Europeans Testify On How Europe Is Banning Americans From Saying What They Believe

The European Union now constantly violates fundamental Western rights to freedom of speech and freedom of religion and claims the power to ban speech across the globe, European witnesses testified to the U.S. Congress Wednesday morning.

“European laws [are] now being exported by the European Union. … American speech is already being affected,” testified Lorcán Price, an Irish lawyer for Alliance Defending Freedom International.

Under “hate speech” policies that Europe is applying across the world, “Speech that is lawful today can become criminalized tomorrow. This should concern every person that values freedom,” testified Finnish Member of Parliament Päivi Räsänen. Irish comedian Graham Linehan also testified before the U.S. House Judiciary Committee. In September 2025, Linehan was arrested at Heathrow Airport by British authorities for criticizing transgender policies. Because of that arrest, he testified, “I became the target of a series of harassment campaigns that cost me my career, my marriage, and eventually drove me from my homeland.”

These Republican witnesses testified alongside Democrat witness Deepinder Singh Mayell, the leader of Minnesota’s branch of the American Civil Liberties Union. Democrats on the committee focused on blasting federal immigration enforcement and insisting that attacking and harassing law enforcement is protected “free speech.” Mayell refused to answer when Rep. Chip Roy, R-Texas, asked if illegal alien child rapists and murderers should be deported.

Keep reading

House Report: EU Pushed Tech Giants to Police US Speech

A newly released report from the House Judiciary Committee reveals a coordinated effort by European Union regulators to pressure major technology companies into enforcing censorship standards that extend far beyond Europe’s borders.

The findings, drawn from thousands of internal documents and communications, detail a long-running strategy to influence global content moderation policies through regulatory coercion and the threat of punishment under Europe’s Digital Services Act (DSA).

The Committee’s latest publication, “The EU Censorship Files, Part II,” coincides with a scheduled February 4 hearing titled “Europe’s Threat to American Speech and Innovation: Part II.”

We obtained a copy of the report for you here

According to the materials, European officials have been meeting privately with social media companies since at least 2015 to “adapt their terms and conditions” to align with EU political priorities, including restricting certain kinds of lawful political expression in the United States.

Internal records from TikTok, then-Twitter, and other firms show that the Commission’s so-called “voluntary” DSA election guidelines were in fact treated as mandatory conditions for doing business in Europe.

Keep reading

France’s Raid on X Opens New Front in Europe’s War Over Online Speech

French prosecutors staged a morning raid at the Paris offices of social media platform X, part of a criminal investigation coordinated with Europol.

The operation, launched in 2025, targets allegations ranging from the alleged distribution of sexual deepfakes to algorithmic manipulation.

The cybercrime division in Paris is exploring whether X’s automated systems may have been used in an “organized structure” to distort data or suppress information.

The alleged offenses are as follows:

  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Falsification of the operation of an automated data processing system by an organized group
  • Defamation of a person’s image (deepfakes of a sexual nature, including minors)
  • Operating of an illegal online platform by an organized group

Prosecutors have now summoned Elon Musk and former CEO Linda Yaccarino for questioning in April. “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the office said.

Yaccarino, who left in mid-2025, might find herself reliving the company’s most volatile months, when X faced regulatory crossfire across the continent for refusing to comply with what it called political censorship demands.

The case actually began with two complaints in January 2025, including one from French lawmaker Eric Bothorel, who accused X of narrowing “diversity of voices and options” after Musk’s takeover.

Bothorel cited “personal interventions” in moderation decisions, a line that seemed more about ideology than algorithms.

As the investigation grew, prosecutors took interest in Grok, X’s AI system, which allegedly produced “Holocaust denial content” and “sexual deepfakes.” The Paris prosecutor’s office soon announced it was examining “biased algorithms.”

Musk called the whole affair a “politically-motivated criminal investigation,” and considering Europe’s recent appetite for speech regulation, it’s not a stretch to see why he’d think that.

Keep reading

TikTok CEO Reveals Coordination With 2 Dozen Jewish Groups to Police Speech

A chilling blueprint for the censorship of pro-Palestine voices on social media has been exposed, directly from the mouth of a top tech executive. Adam Presser, the newly installed CEO of TikTok’s U.S. operations following its forced sale to a consortium led by billionaire Larry Ellison, detailed in a recently resurfaced video how the platform systematically silenced critics by labeling their speech as hateful. This admission confirms the worst fears of free speech advocates and reveals a coordinated effort to shield Israeli government actions from public scrutiny by conflating political criticism with bigotry.

The video, originally presented to the World Jewish Congress, features Presser, who was then TikTok’s Head of Operations and Trust & Safety, outlining specific policy changes. “We made a change to designate the use of the term Zionist as a proxy for a protected attribute as hate speech,” Presser stated. In practice, this means using “Zionist” in a negative context could get a user banned, while phrases like “proud Zionist” remain permitted. This creates a politically motivated double standard where one side of a heated geopolitical debate is granted linguistic immunity.

A tripling of bans and outside influence

Presser boasted of aggressive enforcement, revealing that TikTok “tripled the amount of accounts that we were banning for hateful activity” over the course of 2024. This timeline coincides directly with the global outcry following Israel’s military offensive in Gaza. He further explained that “over two dozen Jewish organizations” are “constantly feeding us intelligence and information when they spot violative trends,” and that these groups help inform TikTok on “what is hate speech.” This outsourcing of content moderation decisions to explicitly partisan advocates strips away any pretense of neutrality, effectively allowing pro-Israel groups to police and silence their critics on a global platform.

The consequences of this policy are not theoretical. Award-winning Palestinian journalist Bisan Owda, who had built an audience of 1.4 million followers on TikTok while documenting the war from Gaza, recently found her account permanently banned. In a video, Owda connected her ban directly to Presser’s remarks and to comments from Israeli Prime Minister Benjamin Netanyahu, who last year called the TikTok purchase “consequential” and stated, “We have to fight with the weapons that apply to the battlefield in which we engage, and the most important ones are social media.”

Keep reading

This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?

We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.

Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users’ posts: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If I defame someone on Facebook, I’m responsible—not Meta. If a neo-Nazi group posts threats on its website, it’s the Nazis, not the domain registrar or hosting service, who could wind up in court.

How Section 230 should apply to generative AI, however, remains a hotly debated issue.

With AI chatbots such as ChatGPT, the “information content provider” is the chatbot. It’s the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?

Section 230 co-author former Rep. Chris Cox (R–Calif.) agrees. “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told The Washington Post in 2023. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can’t predict Gemini’s every response to individual user prompts. Could a chatbot itself count as a separate “information content provider”—its own speaker under the law?

That could leave a liability void. Granting Section 230 immunity to AI for libelous output would “completely cut off any recourse for the libeled person, against anyone,” noted law professor Eugene Volokh in the paper “Large Libel Models? Liability for AI Output,” published in 2023 in the Journal of Free Speech Law.

Treating chatbots as independent “thinkers” is wrong too, argues University of Akron law professor Jess Miers. Chatbots “aren’t autonomous actors—they’re tightly controlled, expressive systems reflecting the intentions of their developers,” she says. “These systems don’t merely ‘remix’ third-party content; they generate speech that expresses the developers’ own editorial framing. In that sense, providers are at least partial ‘creators’ of the resulting content—placing them outside 230’s protection.”

The picture gets more complicated when you consider the user’s role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

Under certain circumstances, it might make sense to absolve AI developers of responsibility. “It’s hard to justify holding companies liable when they’ve implemented reasonable safeguards and the user deliberately circumvents them,” Miers says.

Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.

Keep reading

Miami Beach police chief defends detectives’ visit to activist over Facebook post about mayor

Miami Beach Police Chief Wayne Jones issued a statement Friday explaining why detectives visited the home of a local political activist earlier this week following a social media comment about Mayor Steven Meiner.

“Given the real, ongoing national and international concerns surrounding antisemitic attacks and recent rhetoric that has led to violence against political figures,” Jones wrote that he “directed two of his detectives to initiate a brief, voluntary conversation regarding certain inflammatory, potentially inciteful false remarks made by a resident to ensure there was no immediate threat to the elected official or the broader community that might emerge as a result of the post.”

He went on to write that “the interaction was handled professionally and at no time did the mayor or any other official direct me to take action.”

The statement comes after Raquel Pacheco, a Miami Beach political activist and veteran who previously ran for city commission and was a Democratic state Senate candidate, said Miami Beach detectives arrived at her home.

“He said, ‘We are here to talk to you about a Facebook comment,’ and I said, ‘What? Is this really happening?’” Pacheco told Local 10 News.

Pacheco had commented on a Facebook post by Meiner, who is Jewish, in which he described Miami Beach as “a safe haven for everyone,” contrasting it with New York City, which he said was “intentionally removing protections” for and “promoting boycotts” of Israeli and Jewish businesses.

Pacheco responded to the post by writing, “The guy who consistently calls for the death of all Palestinians, tried to shut down a theater for showing a movie that hurt his feelings, and REFUSES to stand up for the LGBTQ community in any way (even leaves the room when they vote on related matters) wants you to know that you’re all welcome here,” followed by three clown emojis.

She recorded the brief exchange with the detective who spoke with her about the post. In the video, Pacheco is heard asking, “Am I being charged with a crime?” and “So you are here to investigate a statement I allegedly made on Facebook?”

She later added, “This is freedom of speech. This is America, right?”

Pacheco said she believes the visit was politically motivated rather than a matter of public safety.

In the video, the detective is heard saying, “What we are just trying to prevent is somebody else getting agitated or agreeing with the statement, we are not saying if it’s true or not.”

“So that, to me, is a clear indication that people are not allowed to agree with anyone but the mayor and that is not how America works,” Pacheco said. “I don’t agree with him and I am going to continue to voice that.”

The encounter comes amid a national conversation about censorship and free speech, including recent debates sparked by the brief suspension of “Jimmy Kimmel Live.”

While the First Amendment does not apply to private companies, it does protect Americans from government interference in speech.

In the video, the detective advised Pacheco to “refrain from posting things like that,” telling her that her comment about Meiner’s views on Palestinians “can probably incite somebody to do something radical.”

Pacheco, who said she was simply “calling out” what she described as Meiner’s “hypocrisy,” said her comments do not meet that standard.

Keep reading