Britain Launches Cross-Border Censorship Hunt Against 4chan

The UK government has taken another aggressive step in its campaign to regulate online speech, launching formal investigations into the message board 4chan and seven file-sharing sites under its far-reaching Online Safety Act.

But this is more than a domestic crackdown; it is a clear attempt to assert British speech laws far beyond its borders, targeting platforms that have no meaningful presence in the UK.

The law, which came into full force in April, gives sweeping powers to Ofcom, the UK’s communications regulator, to demand that websites and apps proactively remove undefined categories of “illegal content.”

Failure to comply can trigger massive fines of up to £18 million ($24 million) or 10 percent of global revenue, whichever is greater, criminal penalties for company executives, and site-wide bans within the UK.

Now, Ofcom has set its sights on 4chan, a US-hosted imageboard owned by a Japanese national. The site operates under US law and has no physical infrastructure, employees, or legal registration in Britain. Nonetheless, UK regulators have declared it fair game.

“Wherever in the world a service is based if it has ‘links to the UK’, it now has duties to protect UK users,” Ofcom insists.

Inside the secret LAPD club of Gavin Newsom’s nightmares… and their evidence a riot crisis was waiting to happen

Los Angeles cops have a private chatroom — and California’s Democratic leaders won’t like what they’re saying.

The Instagram group ‘Defend the LAPD’ allows officers and commanders to talk freely about what’s really going on in the streets of America’s second-biggest city, where cops clash daily with anti-government rioters.

The Daily Mail gained exclusive access to the 8,500-member club and spoke to its organizers — and the views they presented were a stark rebuke to Gov Gavin Newsom and other leaders of the Democrat-run state.

Despite what their bosses say, LAPD officers broadly support the Trump administration’s deployment of the National Guard to protect federal buildings amid a wave of sometimes violent protests against immigration raids, says the group.

Members also expressed alarm at LA Mayor Karen Bass, a Democrat, for allegedly taking command of their control room, delaying the deployment of officers, and putting federal agents and the public in danger.

They also accused media outlets of one-sided coverage of the protests, saying they focused on heavy-handed policing while overlooking the threat that some violent activists posed to cops and the public.

More broadly, they say the city has ‘quietly defunded’ the LAPD since the George Floyd protests of 2020, and that today’s force is understaffed, under-resourced, and cannot handle the crisis exploding on the streets.

The revelations come as US Marines head to Los Angeles, as part of a federal strategy to quell the protests against immigration raids, which are a signature effort of President Donald Trump’s second term.

European Union Unveils International Strategy Pushing Digital ID Systems and Online Censorship

As part of a broader campaign to expand its global influence in the digital era, the European Union has introduced a sweeping International Digital Strategy that leans heavily on centralized infrastructure, digital identity systems, and regulatory frameworks that raise significant questions about online freedoms and privacy.

The European Commission, in announcing the initiative, stressed its intent to collaborate with foreign governments on a range of areas, prominently featuring digital identity systems and what it calls “Digital Public Infrastructure.”

These frameworks, which have garnered widespread support from transnational institutions such as the United Nations and the World Economic Forum, are being marketed as tools to streamline cross-border commerce and improve mobility.

However, for privacy advocates, the strategy raises red flags due to its promotion of interoperable digital ID programs and a surveillance-oriented model of governance under the guise of efficiency.

According to the strategy documents, one of the EU’s objectives is to drive mutual recognition of electronic trust services, including digital IDs, across partner nations such as Ukraine, Moldova, and several Balkan and Latin American countries. This aligns with the EU’s ambitions to propagate its model of the Digital Identity Wallet, an initiative that privacy campaigners warn could entrench government control over personal data.

The strategy also outlines measures to deepen cooperation on global digital regulation, including laws that govern online speech.

While framed as promoting “freedom of expression, democracy, and citizens’ privacy,” these efforts are closely tied to the enforcement of the Digital Services Act (DSA), which mandates extensive platform compliance and systemic risk monitoring.

Irish Government Admits No Free Speech Impact Assessment for “Misinformation” Laws

Irish authorities have moved ahead with extensive legislation aimed at tackling “misinformation,” yet they have not examined whether such measures might undermine free expression. The Department responsible for communications, media, and environmental policy has acknowledged that no analysis has been carried out to assess the consequences for free speech.

Responding to a media query from Gript, the Department of the Environment, Climate and Communications plainly admitted: “The Department has not undertaken any analysis or research on the potential impact of mis/disinformation laws on free speech.”

Despite this lack of evaluation, the government continues to defend its strategy. Speaking outside Government Buildings, Taoiseach (Prime Minister) Micheál Martin insisted the effort to curb online falsehoods is justified, arguing that some speech doesn’t merit protection. “It’s not freedom of speech, really, when it’s just a blatant lie and untruth, which can create a lot of public disquiet, as we have seen,” he said.

Martin downplayed the idea that regulating disinformation represents any serious threat to expressive freedoms, stating: “There are very strong protections in our constitution and in our laws and freedom of speech.” He added, “I wouldn’t overstate the impact on clamping down on blatant lies online as a sort of incursion or an undermining of freedom of speech.”

When pressed on whether the absence of impact studies was irresponsible, Martin referenced a recent RTÉ radio segment about social media claims related to a shooting in Carlow. “There was a researcher on identifying the blatant misinformation on truths and lies surrounding what happened in Carlow,” he said. “So I do think it’s absolutely important that government focuses on this issue.”

State criminalizes political memes, gets sued by popular satire site

The Babylon Bee, a popular satire website, has filed a lawsuit against the state of Hawaii challenging a state law that censors online content, “including political satire and parody.”

An announcement from the ADF (Alliance Defending Freedom), which is representing the publication as well as a Hawaii resident in the case, said, “The law violates fundamental free speech and due process rights by using vague and overbroad standards to punish people for posting certain political content online, including political memes and parodies of politicians.”

The ADF explained Gov. Josh Green signed S2687 into law in July 2024, and it bans the distribution of “materially deceptive media” that portrays politicians in a way that risks harming “the reputation or electoral prospects of a candidate.”

Further, the state forces satire artists to post disclaimers, destroying the purpose of satire.

“Hawaii’s war against political memes and satire is censorship, pure and simple,” said ADF lawyer Mathew Hoffmann. “Satire has served as an important vehicle to deliver truth with a smile for centuries, and this kind of speech receives the utmost protection under the Constitution. The First Amendment doesn’t allow Hawaii to choose what political speech is acceptable, and we are urging the court to cancel this unnecessary censorship.”

Seth Dillon, chief of the Bee, said, “We’re used to getting pulled over by the joke police, but comedy isn’t a crime. The First Amendment protects our right to tell jokes, whether it’s election season or not. We’ll never stop fighting to defend that freedom.”

France Just Redefined Global Speech on TikTok

TikTok’s decision to block the “SkinnyTok” hashtag across its entire platform followed direct intervention from the French government, revealing how national pressure is increasingly shaping global online speech, even when the content in question is not illegal.

French Digital Minister Clara Chappaz just claimed victory, celebrating the platform’s removal of the term often associated with extreme dieting and weight loss trends. “This is a first collective victory,” she wrote on X after TikTok confirmed the ban was now global.

A spokesperson for the platform stated the hashtag was removed as part of ongoing safety reviews and due to its link to “unhealthy weight loss content.”

While the move has been portrayed as a step forward for user safety, particularly for young audiences, it also raises deeper concerns about the role of governments in controlling speech on private platforms.

The “SkinnyTok” content, though considered by some to be harmful, does not violate any laws. Still, the French government managed to pressure TikTok into removing it worldwide. This maneuver highlights a growing trend in which authorities seek to influence online content standards beyond their own borders, often using platforms as enforcers.

Rather than work through the European Commission or wait for outcomes from the ongoing investigation under the Digital Services Act (DSA), France chose to confront TikTok directly.

EU Tech Laws Erect Digital Iron Curtain

Over the past few decades, Europe has created little of real relevance in terms of technological platforms, social networks, operating systems, or search engines.

In contrast, it has built an extensive regulatory apparatus designed to limit and punish those who have actually innovated.

Rather than producing its own alternatives to American tech giants, the EU has chosen to suffocate existing ones through regulations such as the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The DSA aims to control the content and internal functioning of digital platforms, requiring the rapid removal of content deemed “inappropriate” in what amounts to a modern form of censorship, as well as the disclosure of how algorithms work and restrictions on targeted advertising. The DMA, in turn, seeks to curtail the power of so-called gatekeepers by forcing companies like Apple, Google, or Meta to open their systems to competitors, avoid self-preferencing, and separate data flows between products.

These two regulations could potentially have a greater impact on U.S. tech companies than any domestic legislation, as they are rules made in Brussels but applied to American companies in an extraterritorial manner. And they go far beyond fines: they force structural changes to the design of systems and functionalities, something that no sovereign state should be imposing on foreign private enterprise.

In April 2025, Meta was fined €200 million under the Digital Markets Act for allegedly imposing a “consent or pay” model on European users of Facebook and Instagram, without offering a real alternative. Beyond the fine, it was forced to separate data flows between platforms, thereby compromising the personalized advertising system that sustains its profitability. This was a blatant interference in its business model.

That same month, Apple was fined €500 million for preventing platforms like Spotify from informing users about alternative payment methods outside the App Store. The company was required to remove these restrictions, opening iOS to external app stores and competing payment systems. Once again, this was an unwelcome intrusion and a direct attack on the exclusivity-based model of the Apple ecosystem.

Other companies like Amazon, Google, Microsoft and even X are also under scrutiny, with the latter particularly affected by DSA rules, having been the target of a formal investigation in 2023 for alleged noncompliance in content moderation.

Fifth Circuit Affirms Reasonable Expectation of Privacy in Cloud Storage in Dropbox Case

A federal appeals court has ruled that state officials violated the Fourth Amendment when they orchestrated the covert retrieval of documents from a nonprofit’s Dropbox folder, an outcome that significantly strengthens legal protections for digital privacy in cloud-based environments.

In a 25-page decision issued May 28, 2025, the US Court of Appeals for the Fifth Circuit held that The Heidi Group, a Texas-based pro-life healthcare organization, had a reasonable expectation of privacy in its digital files and that a state investigator’s role in acquiring them without judicial authorization amounted to an unconstitutional search.

Writing for the court, Judge Andrew S. Oldham emphasized that the constitutional right to be free from unreasonable searches extends to “the content of stored electronic communications,” including files housed in commercial cloud platforms.

“Heidi has a reasonable expectation of privacy in its documents and files uploaded to Dropbox,” the opinion stated. “Heidi’s records are analogous to letters, phone calls, emails, and social media messages: Each contains information content transmitted through or stored with an intermediary that is not intended to ‘be broadcast to the world.’”

The controversy arose after Phyllis Morgan, a former employee of The Heidi Group, exploited her lingering access to the organization’s Dropbox folder for nearly a year after being terminated.

Rather than reporting the breach or seeking lawful channels to obtain the data, a senior investigator from the Texas Health and Human Services Commission’s Office of Inspector General (OIG), Gaylon Dacus, allegedly encouraged the ex-employee to continue accessing the nonprofit’s confidential materials and forward them to the state.

EU Commissioner Defends EU’s Censorship Law While Downplaying Brussels’ Indirect Influence Over Online Speech

As the European Union moves aggressively to shape online discourse through the Digital Services Act (DSA), EU Commissioner for Technology Henna Virkkunen has been deflecting scrutiny abroad, pointing fingers at the United States for what she describes as a more extensive censorship regime.

Relying on transparency data, she argues that platforms like Meta and X primarily remove content based on their own terms and conditions rather than due to DSA directives. But this framing misrepresents how enforcement works in practice, and downplays the EU’s systemic role in pushing platforms toward silence through legal design, not open decrees.

Virkkunen highlighted that between September 2023 and April 2024, 99 percent of content takedowns occurred under platform terms of service, with only 1 percent resulting from “trusted flaggers” authorized under the DSA. A mere 0.001 percent were direct orders from state authorities.

On paper, this paints a picture of platform autonomy. But in reality, the architecture of the DSA ensures that removals appear “voluntary” precisely because they are incentivized by looming regulatory consequences.

Under the DSA, platforms are held legally accountable for failing to remove certain types of content.

This liability drives a strong incentive to err on the side of over-removal, creating a culture where companies preemptively censor to minimize risk. Virkkunen frames these decisions as internal, but in truth, many of them reflect anticipatory compliance with European legal expectations.

The fact that content is flagged and removed “under T&Cs” does not indicate independence; it reflects a strategy of risk avoidance in response to EU enforcement pressure.

This dynamic is by design. The DSA doesn’t rely on high numbers of direct takedown orders from governments. Instead, it outsources content control to the platforms themselves, embedding speech restrictions in the guise of corporate policy.

The regulatory burden falls on private actors, but the agenda is shaped by Brussels. Delegating enforcement doesn’t dilute state influence; it conceals it. The veneer of decentralization does not remove the fact that the state has created the framework and exerts ongoing leverage over what platforms consider acceptable.

Texas Ban On Social Media For Under 18s Fails To Pass Senate

Legislation that would have banned anyone under the age of 18 from using or creating social media accounts in Texas stalled in the Senate this week after lawmakers failed to vote on it.

House Bill 186, filed by state Rep. Jared Patterson (R-Frisco), would have prohibited minors from creating accounts on social media sites such as Instagram, TikTok, Facebook, Snapchat, and others by requiring the platforms to verify users’ age.

The measure previously passed the GOP-controlled state House with broad bipartisan support in April, but momentum behind the bill slowed at the eleventh hour in the state Senate this week as lawmakers faced a weekend deadline to send bills to Gov. Greg Abbott’s desk.

The legislative session ends on Monday.

In a statement on the social media platform X late Thursday, Patterson said the bill’s failure to pass in the Senate was “the biggest disappointment of my career,” adding that no other bill filed this session “would have protected more kids in more ways than this one.”

The Republican lawmaker said he believed its failure to pass meant “I’ve failed these kids and their families.”

“I felt the weight of an entire generation of kids who’ve had their mental health severely handicapped as a result of the harms of social media,” the lawmaker said. “And then there’s the others – the parents of Texas kids who’ve died as a result of a stupid social media ‘challenge’ or by suicide after being pulled down the dangerous rabbit holes social media uses to hook their users, addict them on their products, and drive them to depression, anxiety, and suicidal ideation.”

“Finally, there’s the perfectly happy and healthy teens in Texas today, who will find themselves slowly falling off the edge before the legislature meets again in 2027,” he stated.

Patterson suggested he would try to pass the measure again when the Texas Legislature meets in 2027.

House Bill 186 would have prohibited a child from entering into a contract with a social media platform to become an account holder and required platforms to verify that a person seeking to become an account holder is 18 years of age or older before allowing them to create an account.

The legislation would have also required social media platforms to delete accounts belonging to individuals under the age of 18 at a parent or guardian’s request.
