Mississippi’s Digital ID Law Hits a Wall with Mastodon

Mississippi’s privacy-eroding digital ID law is putting pressure on decentralized platforms, and Mastodon says it simply cannot comply.

The organization behind the software states that it lacks the technical ability to verify users’ ages and refuses to implement IP-based restrictions, which it argues would wrongly affect travelers and those temporarily located in the state.

The law, known as the Walker Montgomery Protecting Children Online Act (HB 1126), has already led to Bluesky withdrawing its service from Mississippi.

Mastodon is not following that path. Instead, it points to the design of its platform, where individual server administrators are responsible for their own compliance with local laws. Mastodon itself neither collects user data nor maintains centralized control over the network.

Although Mastodon’s nonprofit arm initially declined to comment, it later provided a statement to TechCrunch.

The organization explained that while its own servers require users to be at least 16, it does not “have the means to apply age verification” and that the software does not retain any data collected during sign-up.

A feature added in the July 2025 release of Mastodon 4.4 allows server administrators to set age minimums and manage legal terms, but does not support storing verification data.

Each server in the network operates independently. It is up to those server owners to decide whether to integrate third-party systems to check user ages.
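As a rough illustration of the model described above, here is a minimal sketch, in Python rather than Mastodon's actual Ruby codebase, of how a sign-up age gate can enforce a minimum age without retaining the birth date it checks. The function names and structure are hypothetical, not Mastodon's real code; they only show the "check once, store nothing" pattern the nonprofit describes.

```python
from datetime import date
from typing import Optional

# Hypothetical server-level minimum; Mastodon's own servers require 16.
MIN_AGE = 16

def age_on(today: date, birth: date) -> int:
    """Full years elapsed between a birth date and a given day."""
    years = today.year - birth.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

def signup_allowed(birth: date, today: Optional[date] = None) -> bool:
    """Gate sign-up on the age minimum. The birth date is checked once
    and never written anywhere, mirroring the no-retention policy."""
    today = today or date.today()
    return age_on(today, birth) >= MIN_AGE
```

In this pattern the birth date lives only in memory for the duration of the check; a server that wanted stronger verification would have to bolt on a third-party service, which is exactly the decision Mastodon leaves to individual operators.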

Mastodon confirmed it cannot offer “direct or operational assistance” to these operators and instead points them to resources such as the IFTAS library, which provides guidance on trust and safety practices for federated platforms.

The nonprofit reiterated that it does not track user behavior or enforce policy across the wider ecosystem. Responsibility for legal compliance, it says, belongs to those who host and manage the servers in their own jurisdictions.

Keep reading

UK free speech crackdown sees up to 30 people a day arrested for petty offenses such as retweets and cartoons

Bernadette Spofforth lay in jail on a blue gym mattress in a daze, finding it difficult to move, even breathe.

“I just closed down. But the other half of my brain went into Jack Reacher mode,” she said, referring to the fictional action hero. “Every single detail was in this very vivid, bright, sharp focus.”

She remembers noticing that you can’t drown yourself in the toilet, because there’s no standing water in it and the flush button would be too far to reach if your head were in the bowl.

She’d end up being detained for 36 hours in July 2024. Three girls had just been murdered in Southport, England, at a Taylor Swift-themed dance party. But Spofforth was not under suspicion for the crime.

Instead, horrified, and in the fog of a developing tragedy, she’d reposted another user’s content on X blaming newly arrived migrants for the ghastly crime, clarifying in her retweet, “If this is true.”

Hours later she realized she may have received bad information and deleted the post — but it had already been seen thousands of times. 

The murders resulted in widespread civil unrest in the UK, where mass migration is a central issue for citizens. Four police vehicles arrived at her home days later. Spofforth, 56, a successful businesswoman from Chester, was placed under arrest.

“We’re a year on now and I can honestly tell you that I don’t think I will ever recover,” she told The Post. “I don’t mean that as a victim. Those poor children were victims. But I will never trust anything the authorities say to me ever again.”

Her story is one repeated almost hourly in the UK, where data suggests more than 30 people a day, about 12,000 a year, are arrested for speech crimes under laws written well before the age of social media that criminalize sending “grossly offensive” messages or sharing content of an “indecent, obscene or menacing character.”

Social media continues to be flooded with videos of British cops banging on doors in the middle of the night and hauling parents off to jail—all over mean Facebook posts and agitated words on X.

Keep reading

Soros-Funded Dark Money Group Secretly Paying Democrat Influencers To Shape Gen Z Politics

When Taylor Lorenz breaks a story on Democrat dark money, you know something strange is happening. Lorenz, who has made a career as the poster child for progressive social media culture, finally turned her reporting lens onto her own side. And what she uncovered in Wired is pretty dark: a secret program bankrolled by one of the largest Democrat dark money machines in America, designed to quietly pay off dozens of high-profile influencers to steer young voters toward the left.

The story centers around the Sixteen Thirty Fund, one of the crown jewels of Arabella Advisors’ dark money empire. According to public filings, this fund has been showered with staggering sums from progressive megadonors:

  • $257.1 million from the New Venture Fund
  • $64 million from the Open Society Action Fund
  • $20.2 million from the Hopewell Fund
  • $13 million from the North Fund
  • $5.6 million from Tides Advocacy

A spreadsheet posted by DataRepublican on X bluntly spelled it out: “That ‘dark money’ group, Sixteen Thirty Fund, is Arabella Advisors and is pure Open Society passthrough.”

Keep reading

Inside Dem Dark Money Behemoth Arabella Advisors’ Failed Attempt To Create an Astroturf Influencer Army

The first rule of Fight Club is you do not talk about Fight Club. The Sixteen Thirty Fund, an offshoot of the left-wing dark money behemoth Arabella Advisors, tried to enforce that dictum when recruiting an army of handsomely paid left-wing influencers to spout Democratic talking points through an effort called “Chorus.”

Contracts reviewed by Wired stipulated that they weren’t supposed to reveal their affiliation with the Sixteen Thirty Fund or tell anybody they were being paid to mouth Democratic Party shibboleths. Presumably that includes complaining to reporters about the stringent terms of the contract and the astroturf nature of the project to “build new infrastructure to fund independent progressive voices online at scale.” Oops.

According to Wired, some of the online Left’s biggest names—including Olivia Julianna, who spoke at the 2024 Democratic National Convention; the “nonbinary content creator” Adesso Laurenzo, who boasts nearly one million TikTok followers; and Aaron Parnas, a social media journalist described by Rolling Stone as “a sort of 20-something Walter Cronkite”—expressed interest. Then they read Chorus’s proposed contract. It included the following terms, according to Wired:

  • Influencers cannot disclose their affiliation with Chorus or the Sixteen Thirty Fund.
  • Influencers cannot disclose “the identity of any Funder” or reveal they’re being paid.
  • Influencers “must funnel all bookings with lawmakers and political leaders through Chorus,” even those organized independently.
  • Influencers cannot use their monthly stipend “to make content that supports or opposes any political candidate or campaign without express authorization from Chorus in advance and in writing.”
  • Influencers must attend “regular advocacy trainings,” “daily messaging check-ins,” and biweekly “newsroom” events with lawmakers and other figures.
  • Influencers must remove content created at said events if Chorus requests them to do so.

Chorus gave the influencers two days to sign the contract and barred prospective affiliates from enlisting their lawyers to request changes.

On a Zoom call with the influencers, a partner at Democratic fixer Marc Elias’s Elias Law Group, Graham Wilson, boasted that “housing” Chorus through a nonprofit gave them “some real great advantages.”

“It gives us the ability to raise money from donors,” he said, according to Wired. “It also, with this structure, it avoids a lot of the public disclosure or public disclaimers—you know, ‘Paid for by blah blah blah blah’—that you see on political ads. We don’t need to deal with any of that. Your names aren’t showing up on, like, reports filed with the FEC.” (Elias Law Group made national headlines when it threatened to stop work for longtime client Media Matters if it didn’t fork over $2.25 million in unpaid bills.)

Many of the influencers approached to join Chorus expressed concerns over the setup in a group chat. “Nonbinary content creator” Laurenzo floated sending a “joint email” requesting changes, while a “reproductive justice influencer named Pari” said there were “at least 4 other things that should change.”

Ultimately, most of them fell in line. “I don’t feel strongly about pushing tbh,” wrote Parnas, the young Cronkite. “They aren’t going to modify it anymore. Seems like a take it or leave it.”

Keep reading

Potential DC school shooter arrested with guns after social media threat: ATF, MTPD report

Agents from the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) and the Metro Transit Police Department (MTPD) say that they stopped a school shooting in D.C. just one day after a mass shooter opened fire on a Mass at a Minneapolis Catholic school.

The investigation began with an “alarming” social media post referencing a potential threat to a DC public school, according to MTPD’s Criminal Investigation Division. They have not yet revealed which school was targeted.

On August 27, MTPD and ATF executed a search warrant at a District residence, where multiple firearms were recovered and a teen was placed under arrest.

“As part of our participation in a longstanding ATF task force, we’re proud of our officers who disrupted this significant public safety threat,” an MTPD spokesperson said. “We are focused on keeping our Metro system and community safe across the region.”

Keep reading

Democrat Policies Are so Unpopular That a Dark Money Group is Paying Progressive Influencers $8K a Month to Push Them

A dark money group on the left is paying progressive influencers up to $8,000 a month to push Democrat policy ideas. Must be nice.

This is so typical. For all of the Democrats’ talk of getting ‘big money’ out of politics, they are among the worst offenders at using big money to advance their agenda when it suits them.

The New York Post reports:

‘Dark money’ group paying pro-Dem influencers up to $8K a month: report

A secretive dark money group tied to the Democratic Party is paying online influencers up to $8,000 a month to disseminate left-leaning talking points, according to a report.

The “Chorus Creator Incubator Program” is said to be funded by the “Sixteen Thirty Fund,” a nonprofit sometimes portrayed as the left’s answer to the Koch network, which has funneled money to dozens of Democratic-friendly influencers, according to WIRED magazine.

The names attached to the program span some of the most recognizable liberal voices online.

They include Olivia Julianna, the Gen Z activist who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned Occupy Democrats YouTuber; and Barrett Adair, the content creator who runs a viral American Girl Doll–themed meme account.

The program also includes Suzanne Lambert, who styles herself a “Regina George liberal”; Arielle Fodor, a teacher with 1.4 million TikTok followers; Sander Jennings, the TLC reality star and older brother of trans influencer Jazz Jennings; and David Pakman, host of a YouTube show.

More from the story in Wired:

“There are some real great advantages to … housing this program in a nonprofit,” Graham Wilson, a lawyer working with Chorus, said to creators on a Zoom call reviewed by WIRED. “It gives us the ability to raise money from donors. It also, with this structure, it avoids a lot of the public disclosure or public disclaimers—you know, ‘Paid for by blah blah blah blah’—that you see on political ads. We don’t need to deal with any of that. Your names aren’t showing up on, like, reports filed with the FEC.” (Wilson did not reply to a request for comment.)

Keep reading

Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that purports to protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain categories. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1986), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the child-protection argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harm. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Keep reading

First Amendment Claim Rejected in Case Over Tennessee Lawmaker’s Facebook Page

Tennessee lawmaker Jeremy Faison, a prominent Republican figure and chairman of the state House GOP Caucus, has emerged at the center of a court case with broader implications for how public officials use social media and what rights their constituents have when they engage online.

Faison operates a Facebook page that blends political messaging with personal content.

That blend became a legal flashpoint when a man named Fox, who doesn’t live in Faison’s district, posted comments that were subsequently deleted.

After being blocked from the page, Fox filed a lawsuit, arguing that Faison had violated his constitutional rights.

The court didn’t see it that way.

Citing the Supreme Court’s recent decision in Lindke v. Freed, the judge determined that Faison was not acting in an official state capacity even when using the page to perform functions associated with his role as a legislator.

We obtained the opinion for you here.

Because individual lawmakers cannot, by law, speak on behalf of the state, the court said Faison’s moderation of comments wasn’t subject to First Amendment constraints.

Keep reading

‘World-first’ eSafety commissioner Julie Inman Grant exposed: How taxpayers’ money is being wasted on an endless game of online whack-a-mole – as it’s revealed banned X posts can simply be re-uploaded

The Australian government’s attempts to police online speech have descended into farce after its ‘world’s first’ eSafety Commissioner admitted it was effectively powerless to stop people re-sharing ‘banned’ posts.

On Tuesday, Daily Mail Australia revealed the Australian government tried to force a Canadian man called Chris Elston to delete an ‘offensive’ post about a UN trans expert, threatening X owner Elon Musk with an $800,000 fine if it was not removed from the platform.

Mr Elston, who goes by the name of ‘Billboard Chris’ on X and lives in Canada, refused to delete the post.

When X subsequently complied with the ‘removal order’ by geo-blocking the post in Australia, Mr Elston simply re-shared the offending post. 

In a colossal backfire for the eSafety Commissioner, that post alone has been seen over 130,000 times, and a concerted campaign by others to re-share it has racked up hundreds of thousands of views.

In response to this publication’s story, billionaire X owner Musk said: ‘What is the world coming to?’

Now, the taxpayer-funded eSafety Commissioner has admitted it can only block or remove the subsequent posts if other complaints are made by the offended party.

‘eSafety’s Adult Cyber Abuse Scheme is a complaints-based scheme,’ a spokesperson for the eSafety Commissioner told Daily Mail Australia. 

‘In cases where a new version of the material has been posted after a removal notice has been issued and complied with, we require a new complaint from the targeted Australian resident – or someone authorised to report on their behalf – to take regulatory action.’

This effectively means the ‘world’s first’ online safety regulator could be engaged in an endless game of whack-a-mole as it attempts to police speech online. 

Political figures were lining up to condemn the alleged waste of Aussie taxpayers’ money.

Keep reading

YouTube Faces Backlash for Quietly Using AI to Alter Shorts Without Creator Consent

Some YouTube creators are crying foul after learning the platform has been quietly using AI to modify their Shorts without notification or approval.

For months, users have noticed odd visual quirks in some Shorts, such as skin that looks overly airbrushed, clothes appearing sharper than expected, or facial features slightly distorted.

The alterations are subtle enough to go unnoticed in isolation, but side-by-side comparisons have revealed inconsistencies that many say make their videos feel unnatural or artificial.

Musician and creator Rhett Shull spotlighted the issue in a video that has drawn over 700,000 views. Comparing his uploads across platforms, he pointed out that YouTube had seemingly softened and retouched his Shorts without permission.

“I did not consent to this,” said Shull. “Replacing or enhancing my work with some AI upscaling system not only erodes trust with the audience, but it also erodes my trust in YouTube.”

Keep reading