Germany Targets X Executives in Unprecedented Criminal Probe over Refusal to Hand Over User Data in “Hate Speech” Cases

German authorities have opened a criminal investigation targeting three managers at X, accusing them of “obstruction of justice” for refusing to directly provide user data in online speech-related cases.

Two of the employees are American, and one of them is reportedly Diego de Lima Gualda, the former head of X’s operations in Brazil, who previously faced off against legal demands in his home country before resigning in April 2024.

The alleged problem for Germany is X’s policy of forwarding German requests for user data to US authorities, following procedures established under a bilateral Mutual Legal Assistance Treaty (MLAT).

That treaty lays out the legal framework for cross-border data sharing, requiring requests from German prosecutors to be reviewed and processed through US legal channels before X is compelled to hand over user information.

Despite this legally grounded process, prosecutors in Göttingen have decided to treat the policy as criminal interference, marking what appears to be the first time in German legal history that social media executives are being investigated for how they respond to international legal requests.

German prosecutors have reportedly been frustrated by X’s unwillingness to grant them direct access to account data, particularly in cases involving posts that include banned symbols like swastikas or comments that authorities allege may amount to defamation.

The inability to obtain data has resulted in stalled investigations and dropped cases, including one where a post containing a swastika could not be traced to its author.

Although X restricted that post within Germany, the company declined to release identifying information.

Keep reading

Israeli official barred from social media, contact with minors after court hearing on child sex charge

Israeli government official Tom Alexandrovich appeared before a Henderson Court judge on Wednesday via Zoom to discuss the conditions of his bail.

The judge ruled that Alexandrovich is not allowed to have contact with minors and is not allowed to use dating apps or social media to meet with people.

Attorney Matthew Hoffmann explained that because Alexandrovich is not in the United States, enforcing the new conditions of his bail will be difficult.

“There’s been a lot of media spotlight on this case for obvious reasons, so I think that all of that combined pressure is really the realistic way that you’re going to see the court can feel comfortable that there will be compliance,” he said.

Keep reading

Australia’s Senate Orders Release of eSafety Censorship Emails

The Australian Senate has formally ordered the production of all communications between “eSafety” Commissioner Julie Inman Grant and the Global Alliance for Responsible Media (GARM), adding to the scrutiny over the Commissioner’s role in transnational efforts to stifle online political speech.

While the contents of the emails had already come to light through a US House Judiciary Committee investigation, the Senate’s move signals a significant shift, one aimed squarely at holding a senior Australian bureaucrat accountable for her coordination with a foreign activist group pushing to censor views, including those of US President Donald Trump.

Senator Alex Antic, who introduced the motion, confirmed its passage on Wednesday afternoon, posting: “The Senate has voted in favour of my order for production of documents relating to communications between the Office of the eSafety Commissioner and the Global Alliance for Responsible Media.”

Keep reading

Mississippi’s Digital ID Law Hits a Wall with Mastodon

Mississippi’s privacy-ruining online digital ID law is putting pressure on decentralized platforms, and Mastodon says it simply cannot comply.

The organization behind the software states that it lacks the technical ability to verify users’ ages and refuses to implement IP-based restrictions, which it argues would wrongly affect travelers and those temporarily located in the state.

The law, known as the Walker Montgomery Protecting Children Online Act (HB 1126), has already led to Bluesky withdrawing its service from Mississippi.

Mastodon is not following that path. Instead, it points to the design of its platform, where individual server administrators are responsible for their own compliance with local laws. Mastodon itself neither collects user data nor maintains centralized control over the network.

Although Mastodon’s nonprofit arm initially declined to comment, it later provided a statement to TechCrunch.

The organization explained that while its own servers require users to be at least 16, it does not “have the means to apply age verification” and that the software does not retain any data collected during sign-up.

A feature added in the July 2025 release of Mastodon 4.4 allows server administrators to set age minimums and manage legal terms, but does not support storing verification data.

Each server in the network operates independently. It is up to those server owners to decide whether to integrate third-party systems to check user ages.

Mastodon confirmed it cannot offer “direct or operational assistance” to these operators and instead points them to resources such as the IFTAS library, which provides guidance on trust and safety practices for federated platforms.

The nonprofit reiterated that it does not track user behavior or enforce policy across the wider ecosystem. Responsibility for legal compliance, it says, belongs to those who host and manage the servers in their own jurisdictions.

Keep reading

UK free speech crackdown sees up to 30 people a day arrested for petty offenses such as retweets and cartoons

Bernadette Spofforth lay in jail on a blue gym mattress in a daze, finding it difficult to move, even breathe.

“I just closed down. But the other half of my brain went into Jack Reacher mode,” she said, referring to the fictional action hero. “Every single detail was in this very vivid, bright, sharp focus.”

She remembers noticing that you can’t drown yourself in the toilet: there’s no standing water in it, and the flush button would be too far to reach with your head in the bowl.

She’d end up being detained for 36 hours in July 2024. Three girls had just been murdered in Southport, England, at a Taylor Swift-themed dance party. But Spofforth was not under suspicion for the crime.

Instead, horrified, and in the fog of a developing tragedy, she’d reposted on X another user’s content blaming newly arrived migrants for the ghastly crime — clarifying in her retweet, “If this is true.”

Hours later she realized she may have received bad information and deleted the post — but it had already been seen thousands of times. 

The murders resulted in widespread civil unrest in the UK, where mass migration is a central issue for citizens. Four police vehicles arrived at her home days later. Spofforth, 56, a successful businesswoman from Chester, was placed under arrest.

“We’re a year on now and I can honestly tell you that I don’t think I will ever recover,” she told The Post. “I don’t mean that as a victim. Those poor children were victims. But I will never trust anything the authorities say to me ever again.”

Her story is one repeated almost hourly in the UK, where data suggest more than 30 people a day, about 12,000 a year, are arrested for speech offenses under laws written well before the age of social media that criminalize sending “grossly offensive” messages or sharing content of an “indecent, obscene or menacing character.”

Social media continues to be flooded with videos of British cops banging on doors in the middle of the night and hauling parents off to jail—all over mean Facebook posts and agitated words on X.

Keep reading

Soros-Funded Dark Money Group Secretly Paying Democrat Influencers To Shape Gen Z Politics

When Taylor Lorenz breaks a story on Democrat dark money, you know something strange is happening. Lorenz, who has made a career as the poster child for progressive social media culture, finally turned her reporting lens onto her own side. And what she uncovered in Wired is pretty dark: a secret program bankrolled by one of the largest Democrat dark money machines in America, designed to quietly pay off dozens of high-profile influencers to steer young voters toward the left.

The story centers around the Sixteen Thirty Fund, one of the crown jewels of Arabella Advisors’ dark money empire. According to public filings, this fund has been showered with staggering sums from progressive megadonors:

  • $257.1 million from the New Venture Fund
  • $64 million from the Open Society Action Fund
  • $20.2 million from the Hopewell Fund
  • $13 million from the North Fund
  • $5.6 million from Tides Advocacy

A spreadsheet posted by DataRepublican on X bluntly spelled it out: “That ‘dark money’ group, Sixteen Thirty Fund, is Arabella Advisors and is pure Open Society passthrough.”

Keep reading

Inside Dem Dark Money Behemoth Arabella Advisors’ Failed Attempt To Create an Astroturf Influencer Army

The first rule of Fight Club is you do not talk about Fight Club. The Sixteen Thirty Fund, an offshoot of the left-wing dark money behemoth Arabella Advisors, tried to enforce that dictum when recruiting an army of handsomely paid left-wing influencers to spout Democratic talking points through an effort called “Chorus.”

Contracts reviewed by Wired stipulated that they weren’t supposed to reveal their affiliation with the Sixteen Thirty Fund or tell anybody they were being paid to mouth Democratic Party shibboleths. Presumably that includes complaining to reporters about the stringent terms of the contract and the astroturf nature of the project to “build new infrastructure to fund independent progressive voices online at scale.” Oops.

According to Wired, some of the online Left’s biggest names—including Olivia Julianna, who spoke at the 2024 Democratic National Convention; the “nonbinary content creator” Adesso Laurenzo, who boasts nearly one million TikTok followers; and Aaron Parnas, a social media journalist described by Rolling Stone as “a sort of 20-something Walter Cronkite”—expressed interest. Then they read Chorus’s proposed contract. It included the following terms, according to Wired:

  • Influencers cannot disclose their affiliation with Chorus or the Sixteen Thirty Fund.
  • Influencers cannot disclose “the identity of any Funder” or reveal they’re being paid.
  • Influencers “must funnel all bookings with lawmakers and political leaders through Chorus,” even those organized independently.
  • Influencers cannot use their monthly stipend “to make content that supports or opposes any political candidate or campaign without express authorization from Chorus in advance and in writing.”
  • Influencers must attend “regular advocacy trainings,” “daily messaging check-ins,” and biweekly “newsroom” events with lawmakers and other figures.
  • Influencers must remove content created at said events if Chorus requests it.

Chorus gave the influencers two days to sign the contract and barred prospective affiliates from enlisting their lawyers to request changes.

On a Zoom call with the influencers, a partner at Democratic fixer Marc Elias’s Elias Law Group, Graham Wilson, boasted that “housing” Chorus through a nonprofit gave them “some real great advantages.”

“It gives us the ability to raise money from donors,” he said, according to Wired. “It also, with this structure, it avoids a lot of the public disclosure or public disclaimers—you know, ‘Paid for by blah blah blah blah’—that you see on political ads. We don’t need to deal with any of that. Your names aren’t showing up on, like, reports filed with the FEC.” (Elias Law Group made national headlines when it threatened to stop work for longtime client Media Matters if it didn’t fork over $2.25 million in unpaid bills.)

Many of the influencers approached to join Chorus expressed concerns over the setup in a group chat. “Nonbinary content creator” Laurenzo floated sending a “joint email” requesting changes, while a “reproductive justice influencer named Pari” said there were “at least 4 other things that should change.”

Ultimately, most of them fell in line. “I don’t feel strongly about pushing tbh,” wrote Parnas, the young Cronkite. “They aren’t going to modify it anymore. Seems like a take it or leave it.”

Keep reading

Potential DC school shooter arrested with guns after social media threat: ATF, MTPD report

Agents from the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) and the Metro Transit Police Department (MTPD) say that they stopped a school shooting in D.C. just one day after a mass shooter opened fire during Mass at a Minneapolis Catholic school.

The investigation began with an “alarming” social media post referencing a potential threat to a DC public school, according to MTPD’s Criminal Investigation Division. They have not yet revealed which school was targeted.

On August 27, MTPD and ATF executed a search warrant at a District residence, where multiple firearms were recovered and a teen was placed under arrest.

“As part of our participation in a longstanding ATF task force, we’re proud of our officers who disrupted this significant public safety threat,” an MTPD spokesperson said. “We are focused on keeping our Metro system and community safe across the region.”

Keep reading

Democrat Policies Are so Unpopular That a Dark Money Group is Paying Progressive Influencers $8K a Month to Push Them

A dark money group on the left is paying progressive influencers up to $8,000 a month to push Democrat policy ideas. Must be nice.

This is so typical. For all of the Democrats’ talk of getting ‘big money’ out of politics, they are the worst culprits of using big money to advance their agenda when it suits them.

The New York Post reports:

‘Dark money’ group paying pro-Dem influencers up to $8K a month: report

A secretive dark money group tied to the Democratic Party is paying online influencers up to $8,000 a month to disseminate left-leaning talking points, according to a report.

The “Chorus Creator Incubator Program” is said to be funded by the “Sixteen Thirty Fund,” a nonprofit sometimes portrayed as the left’s answer to the Koch network and which has funneled money to dozens of Democratic-friendly influencers, according to WIRED magazine.

The names attached to the program span some of the most recognizable liberal voices online.

They include Olivia Julianna, the Gen Z activist who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned Occupy Democrats YouTuber; and Barrett Adair, the content creator who runs a viral American Girl Doll–themed meme account.

The program also includes Suzanne Lambert, who styles herself a “Regina George liberal”; Arielle Fodor, a teacher with 1.4 million TikTok followers; Sander Jennings, the TLC reality star and older brother of trans influencer Jazz Jennings; and David Pakman, host of a YouTube show.

More from the story in Wired:

“There are some real great advantages to … housing this program in a nonprofit,” Graham Wilson, a lawyer working with Chorus, said to creators on a Zoom call reviewed by WIRED. “It gives us the ability to raise money from donors. It also, with this structure, it avoids a lot of the public disclosure or public disclaimers—you know, ‘Paid for by blah blah blah blah’—that you see on political ads. We don’t need to deal with any of that. Your names aren’t showing up on, like, reports filed with the FEC.” (Wilson did not reply to a request for comment.)

Keep reading

Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that ostensibly protects children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain narrow categories, such as incitement, obscenity, or true threats. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1986), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the protecting children argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harms. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Keep reading