France opens criminal probe into X algorithms under Musk

A French prosecutor has opened a criminal investigation into social platform X and its owner, Elon Musk, on accusations of “creating bias in France’s democratic debate.”

The investigation comes after Musk’s artificial intelligence (AI) company, xAI, deleted multiple posts from its chatbot Grok that included antisemitic comments. Among them, Grok called itself “MechaHitler” and insinuated that Jewish people control Hollywood.

French National Assembly member Thierry Sother and European Union Parliament member Pierre Jouvet asked Arcom, France’s digital content regulator, to look into Grok’s behavior Thursday. 

“Since the July 4th update, Grok’s behavior has changed substantially, leading it to voice antisemitic ideas, to praise Hitler and even to support Le Pen,” Sother told French outlet Libération.

X did not immediately respond to requests for comment.

X and Musk have been on French and European radars since January when Éric Bothorel, a French parliamentarian, raised concerns over X’s use of personal data, a biased algorithm and the reduction of diversity in posts. 

He also denounced Musk’s personal interference within the platform, calling it “a true danger and a threat for our democracies,” according to Libération.  

Keep reading

This Newly Implemented Online Speech Code Just Gave European Censors Another Weapon

Under the shadow of the European Union’s Digital Services Act censorship regime, Europeans already face fines, raids, and arrests for their social media posts, but starting July 1, the Code of Conduct on Disinformation has the force of law. The once-voluntary “code,” a 56-page document that spells out censorship strategies, is now an enforceable “benchmark” that the EU can use to measure tech companies’ censorship regimes.

The Code requires large online platforms to meet “tougher transparency and auditing obligations aimed at stamping out disinformation,” according to Tech Policy Press. Previously, the Code operated as a “self-regulatory framework” for tech companies before the EU “endorsed” its “integration” into the DSA.

The DSA “regulates online intermediaries and platforms” to police so-called “disinformation.” Under the law, which went into full effect last year, tech companies like Google, Meta, Microsoft, and X are required to undergo independent audits that “assess” their management of “disinformation risks,” Tech Policy Press reported. The Code commitments will act as “benchmarks” for these assessments, where applicable.

In April 2023, the EU designated 19 large tech companies required to comply with the DSA. All of these companies serve more than 45 million monthly users in the EU, and 14 of them are U.S.-based, Alliance Defending Freedom Senior Counsel Jeremy Tedesco told The Federalist. As of June, the Commission is still “supervising” these tech companies under the DSA.

“The EU is trying to impose its draconian, very restrictive free speech regime on the world,” Tedesco said in an interview.

Full adherence to the Code is now a “marker of DSA compliance” for companies, according to Tech Policy Press. “While signing on [to the Code] remains optional,” “failing to adhere to its commitments may now trigger investigations or fines.”

Keep reading

Sweden Cracks Down On OnlyFans – Will U.S. Follow Suit?

The X-rated social media platform OnlyFans is experiencing real growth, with revenue, content, and user numbers all on the rise. The site’s over 4 million “creators” sell content – including images, videos, and personalized chats – to more than 300 million subscribers, or “fans.” It’s primarily a sex site, and claims that the platform isn’t powered by porn are usually accompanied by winks and nods to the contrary.

OnlyFans keeps a 20% cut of what users pay, boasting $1.3 billion in revenue in 2023. It’s a lucrative approach to monetizing porn consumption, but the platform just hit a legal roadblock in a seemingly unlikely country.

Sweden, which in 1971 became the second country in the world to formally legalize all forms of pornography, has not been as soft on prostitution. In 1999, the country criminalized the purchase of sex, but not the sale, in efforts to protect vulnerable women from facing stiff legal consequences.

That policy will now apply to the virtual world. As of July 1, Swedes could face up to a year in prison for paying someone for personalized online sexual services, including sexting and video content. The new law also criminalizes promoting or profiting from others who perform sex acts for payment on demand, forcing OnlyFans to pull out of Sweden.

In a country known for libertines more than prudes, the law passed with broad, cross-party support. “The idea is that anyone who buys sexual acts performed remotely should be penalized in the same way as those who buy sexual acts involving physical contact,” said Gunnar Strömmer, Sweden’s Justice Minister and a member of the Moderate party.

Keep reading

UK: Ofcom-Backed Study Could Be Part of a Push to Extend “Impartiality” Rules to Online Media

A government-funded research campaign spearheaded by Ofcom and Cardiff University will raise red flags among free speech advocates, as it aims to scrutinize the so-called impartiality of political news coverage across UK media.

This expansive project, backed by £755,625 (about $1.03 million) in public funds from the Arts and Humanities Research Council (AHRC), specifically includes online and broadcast outlets and coincides with the run-up to the 2024/25 general election.

Though framed as an academic endeavor, the collaboration involves not only researchers but a cadre of mainstream broadcasters with longstanding ties to government regulation.

These include the BBC, ITV, Channel 4, Channel 5, Sky News, and ITN. Ofcom, the UK’s communications regulator and enforcer of the wide-reaching and controversial online censorship law, the Online Safety Act, is a central partner.

The project description emphasizes “challenging but urgent questions” about how political coverage is presented to the public and hints at future interventions under the pretense of raising editorial standards.

Cardiff University, which is leading the study, openly states that it intends to “identify where editorial standards can be raised to better inform…audiences.”

While no explicit call for regulatory changes is made, the language closely mirrors the justification often used to extend oversight, particularly toward newer or nonconforming media platforms.

The announcement states that “accusations of so-called media bias abound, often fuelled by edited clips circulating across online and social media platforms rather than scientific studies of news reporting.”

Impartiality rules enforced by Ofcom have repeatedly been used as tools to investigate and sanction broadcasters like GB News and TalkTV, which have a large online presence.

Keep reading

Congress Exposes Government-Corporate Collusion Behind Censorship of Conservative Voices

A House Judiciary Committee investigation led by Chairman Jim Jordan has exposed what may be the largest government-coordinated censorship operation in U.S. history. Over two years, the committee has documented how federal agencies, major corporations, universities, and even foreign governments colluded to silence conservative voices, manipulate public discourse, and erode the First Amendment in what investigators now call the “Censorship-Industrial Complex.”

The investigation, which began with social media platforms, has now expanded to include artificial intelligence. In March 2025, Jordan sent letters to 16 major tech companies, including Google, Apple, Microsoft, OpenAI, and Anthropic, demanding documents related to potential Biden administration pressure to censor lawful speech in AI systems. The committee is investigating whether the administration “coerced or colluded” with AI firms to suppress content, marking a significant new front in the censorship inquiry.

Evidence from tens of thousands of internal emails and documents obtained via congressional subpoenas reveals a coordinated censorship campaign targeting dissenting views on everything from COVID-19 vaccines to the 2020 election. At the center was the Global Alliance for Responsible Media (GARM), an initiative of the World Federation of Advertisers whose members control nearly $1 trillion in annual ad spending, about 90% of the global market.

House investigators describe GARM as an “advertising cartel” that used ad boycotts, content moderation, and “disinformation” labels to defund conservative outlets and pressure platforms into compliance. Internal communications show GARM co-founder Robert Rakowitz privately called silencing President Trump his “main thing” and compared his speech to a “contagion” that needed containment.

Investigators found direct coordination with foreign regulators, including the European Commission and Australia’s eSafety Commissioner. In one message, a European official urged advertisers to “push Twitter to deliver on GARM asks.” Australia’s Julie Inman Grant praised GARM’s “significant collective power” and asked for updates to guide her office’s regulatory decisions.

Internal emails show GARM members openly admitting they “hated the ideology” of conservative outlets like Fox News, The Daily Wire, and Breitbart. GroupM, the world’s largest media buying agency and a GARM Steer Team member, put The Daily Wire on a “Global High Risk exclusion list” under “Conspiracy Theories,” without citing any conspiracy content.

Perhaps most revealing was GARM’s pressure campaign against Spotify over Joe Rogan. When Rogan suggested young, healthy people might not need COVID vaccines, GARM threatened to pull ads across all of Spotify. Yet GroupM didn’t even advertise on Rogan’s show, proving this wasn’t about brand safety but ideological control.

GARM collapsed in August 2024 after X (formerly Twitter) sued for antitrust violations. The World Federation of Advertisers claimed it lacked the resources to defend the case, effectively admitting defeat.

Keep reading

Government REFUSES to release ‘eSafety’ data behind YouTube kids ban

Labor Communications Minister Anika Wells has refused to release the research that underpins the eSafety Commissioner’s push to ban 15-year-olds from using YouTube.

The contentious recommendation, made by eSafety Commissioner Julie Inman Grant, has sparked widespread concern among stakeholders and the public. Yet Wells has declined to release the data informing the advice, citing the regulator’s preference to delay publication.

Sky News reports that the eSafety regulator has repeatedly blocked its attempts to access the full research, instead opting to “drip feed” select findings to the public over several months, even though the Albanese government is expected to make a final decision in just weeks.

A spokesperson for Wells said: “The minister is taking time to consider the eSafety Commissioner’s advice. The minister has been fully briefed by the eSafety Commissioner including the research methodology behind her advice.”

However, the Commissioner’s own “Keeping Kids Safe Online: Methodology” report reveals several weaknesses in the data. The survey relied entirely on self-reported responses taken at one point in time and used “non-probability-based sampling” from online panels, described in the report as “convenience samples”.

Keep reading

‘Delete, Silence, Abolish’: America’s estranged allies ramp up perceived censorship, speech rules

Overt government control of the internet is expanding among America’s increasingly estranged allies and threatening to spill over national boundaries, likely renewing earlier confrontations with Vice President JD Vance, Secretary of State Marco Rubio and the world’s richest man, creator of America’s newest political party.

The European Union last week made its officially voluntary three-year-old “Code of Practice on Disinformation” legally binding under the Digital Services Act. It’s now a “Code of Conduct” to be used as a “relevant benchmark for determining DSA compliance” for Facebook, Instagram, LinkedIn, Bing, TikTok, YouTube and Google Search.

These “very large” online platforms and online search engines were already signatories of the 2022 code, whose commitments include taking “stronger measures to demonetise disinformation,” increasing fact-checking across the EU and its languages and improved reduction of “current and emerging manipulative behaviour.”

Australia imposed an age-verification law for harmful content that makes the Texas law recently upheld by the Supreme Court look like a type-your-age prompt, applying to not only pornography but also “violent content” and “themes of suicide, self-harm and disordered eating,” in the words of eSafety Commissioner Julie Inman Grant.

Last week she registered three of nine “codes” submitted by the online industry, covering “search engine services … enterprise hosting services and internet carriage services such as telcos,” and has sought “additional safety commitments” on remaining codes for “app stores, device manufacturers, social media services and messaging” and broader categories.

The same day, Canada suspended a U.S. tech firm tax to avoid trade recriminations from the Trump administration. Justice Minister Sean Fraser told the Canadian Press that Prime Minister Mark Carney’s government is taking a “fresh” look at predecessor Justin Trudeau’s proposed Online Harms Act, which died amid Trudeau’s political downfall.

Anti-censorship group Reclaim the Net flagged pressure on Carney’s government to revive C-63, which famed Canadian psychologist Jordan Peterson claims would criminalize wrongthink. Trudeau-appointed Senator Kristopher Wells pressed Government Representative Marc Gold to commit to further criminalizing “hate” in a “questions period” last month.

Keep reading

UN Launches Task Force to Combat Global “Disinformation” Threat

The United Nations has unveiled its first Global Risk Report, placing what it terms “mis- and disinformation” among the most serious threats facing the world.

Tucked into the report is the announcement of a new task force, formed to address how unauthorized narratives might disrupt the UN’s ability to carry out its programs, particularly its centerpiece initiative, the 2030 Agenda.

Rather than encouraging open discourse or transparency, the organization has taken a route that centers on managing what information gets seen and heard.

While the language used suggests a concern for public welfare, the actual emphasis lies on shielding the UN’s agenda from interference.

According to the report, survey respondents, a group that included member states, NGOs, private companies, and other organizations, overwhelmingly called for joint government action and multistakeholder coalitions to deal with the highlighted risks.

Yet there is no clear endorsement of more open communication or free expression. The dominant solution appears to be top-down control over public narratives.

This newly established task force has a single focus. Its job is to assess how so-called mis- and disinformation affect the UN’s ability to deliver on its goals.

The report does not describe how this benefits the public or strengthens democratic values. Instead, the team’s mission is about insulating UN operations from disruption, particularly as they pertain to the Sustainable Development Goals.

The SDGs, which make up the foundation of the 2030 Agenda, touch nearly every aspect of governance and development, from climate to education to healthcare.

Keep reading

Bombshell report exposes attempts by Muslim Council of Britain group to censor UK media

The Muslim Council of Britain’s media monitoring unit “acted in bad faith” by trying to suppress accurate reporting about terrorism and risks curtailing press freedom, a bombshell report has claimed.

Policy Exchange tonight released its 94-page report, titled ‘Bad Faith Actor: A study of the Centre for Media Monitoring’, which exposed the organisation’s inadequate methods of documenting Islamophobia and its partisan agenda.

Despite the CfMM claiming that 60 per cent of stories about Muslims are “offending” and negative, Policy Exchange found that just one complaint made by the group resulted in a newspaper being required to make a correction.

Policy Exchange revealed that CfMM, which sat on a working group at press regulator Ipso, counted factual reports of Islamist terror attacks in its 60 per cent figure of Islamophobic journalism, including a Manchester terror attack report by agency AP that accurately used the phrase “knife-wielding man yelling Islamic slogans”.

Keep reading

Canada Eyes Revival of Online Censorship Bill

As Canada’s government hints at reviving its shelved Online Harms Bill, concerns are mounting that this could signal a renewed assault on free speech. The legislation, once known as Bill C-63, had been left behind when Parliament was prorogued earlier this year.

Now, under Prime Minister Mark Carney, the Liberals appear ready to give their controversial plan another try, leaving civil liberties groups on high alert.

The Democracy Fund (TDF), a leading voice in the fight for free expression, has been quick to sound the alarm. Mark Joseph, TDF’s litigation director, argues that no sweeping new regime is necessary.

“There are laws in place that the government can, and does, use to address most of the bad conduct that the Bill ostensibly targeted,” he pointed out.

In Joseph’s view, any genuine gaps in the Criminal Code could be addressed with targeted amendments, rather than broad measures that risk suffocating debate.

“The previous Bill C-63 sought to implement a regime of mass censorship,” he warned, adding that TDF remains determined to resist efforts to criminalize speech and punish lawful debate.

The government, for its part, insists it is simply reassessing its approach. Justice Minister Sean Fraser has described the current review as a “fresh look” at how best to address online harms.

But for those who value open dialogue, such language offers little comfort, raising fears of government overreach cloaked in promises of safety.

Keep reading