Democrat Policies Are so Unpopular That a Dark Money Group is Paying Progressive Influencers $8K a Month to Push Them

A dark money group on the left is paying progressive influencers up to $8,000 a month to push Democrat policy ideas. Must be nice.

This is so typical. For all of the Democrats’ talk of getting ‘big money’ out of politics, they are the worst culprits at using big money to advance their agenda when it suits them.

The New York Post reports:

‘Dark money’ group paying pro-Dem influencers up to $8K a month: report

A secretive dark money group tied to the Democratic Party is paying online influencers up to $8,000 a month to disseminate left-leaning talking points, according to a report.

The “Chorus Creator Incubator Program” is said to be funded by the “Sixteen Thirty Fund,” a nonprofit sometimes portrayed as the left’s answer to the Koch network, which has funneled money to dozens of Democratic-friendly influencers, according to WIRED magazine.

The names attached to the program span some of the most recognizable liberal voices online.

They include Olivia Julianna, the Gen Z activist who spoke at the 2024 Democratic National Convention; Loren Piretra, a former Playboy executive turned Occupy Democrats YouTuber; and Barrett Adair, the content creator who runs a viral American Girl Doll–themed meme account.

The program also includes Suzanne Lambert, who styles herself a “Regina George liberal”; Arielle Fodor, a teacher with 1.4 million TikTok followers; Sander Jennings, the TLC reality star and older brother of trans influencer Jazz Jennings; and David Pakman, host of a YouTube show.

More from the story in Wired:

“There are some real great advantages to … housing this program in a nonprofit,” Graham Wilson, a lawyer working with Chorus, said to creators on a Zoom call reviewed by WIRED. “It gives us the ability to raise money from donors. It also, with this structure, it avoids a lot of the public disclosure or public disclaimers—you know, ‘Paid for by blah blah blah blah’—that you see on political ads. We don’t need to deal with any of that. Your names aren’t showing up on, like, reports filed with the FEC.” (Wilson did not reply to a request for comment.)

Keep reading

Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that purports to protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain categories. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1985), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

Beyond the protecting-children argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harm. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Keep reading

First Amendment Claim Rejected in Case Over Tennessee Lawmaker’s Facebook Page

Tennessee lawmaker Jeremy Faison, a prominent Republican figure and chairman of the state House GOP Caucus, has emerged at the center of a court case with broader implications for how public officials use social media and what rights their constituents have when they engage online.

Faison operates a Facebook page that blends political messaging with personal content.

That blend became a legal flashpoint when a man named Fox, who doesn’t live in Faison’s district, posted comments that were subsequently deleted.

After being blocked from the page, Fox filed a lawsuit, arguing that Faison had violated his constitutional rights.

The court didn’t see it that way.

Citing the Supreme Court’s recent decision in Lindke v. Freed, the judge determined that Faison was not acting in an official state capacity even when using the page to perform functions associated with his role as a legislator.

We obtained the opinion for you here.

Because individual lawmakers cannot, by law, speak on behalf of the state, the court said Faison’s moderation of comments wasn’t subject to First Amendment constraints.

Keep reading

‘World-first’ eSafety Commissioner Julie Inman Grant exposed: How taxpayers’ money is being wasted on an endless game of online whack-a-mole – as it’s revealed banned X posts can simply be re-uploaded

The Australian government’s attempts to police online speech have descended into farce after its ‘world’s first’ eSafety Commissioner admitted it was effectively powerless to stop people re-sharing ‘banned’ posts.

On Tuesday, Daily Mail Australia revealed the Australian government tried to force a Canadian man called Chris Elston to delete an ‘offensive’ post about a UN trans expert, threatening X owner Elon Musk with an $800,000 fine if it was not removed from the platform.

Mr Elston, who goes by the name of ‘Billboard Chris’ on X and lives in Canada, refused to delete the post.

When X subsequently complied with the ‘removal order’ by geo-blocking the post in Australia, Mr Elston simply re-shared the offending post. 

In a colossal backfire for the eSafety Commissioner, that post alone has been seen over 130,000 times, and a concerted campaign by others to re-share it has racked up hundreds of thousands of views.

In response to this publication’s story, billionaire X owner Musk said: ‘What is the world coming to?’

Now, the taxpayer-funded eSafety Commissioner has admitted it can only block or remove the subsequent posts if other complaints are made by the offended party.

‘eSafety’s Adult Cyber Abuse Scheme is a complaints-based scheme,’ a spokesperson for the eSafety Commissioner told Daily Mail Australia. 

‘In cases where a new version of the material has been posted after a removal notice has been issued and complied with, we require a new complaint from the targeted Australian resident – or someone authorised to report on their behalf – to take regulatory action.’

This effectively means the ‘world’s first’ online safety regulator could be engaged in an endless game of whack-a-mole as it attempts to police speech online. 

Political figures lined up to condemn the alleged waste of Aussie taxpayers’ money.

Keep reading

YouTube Faces Backlash for Quietly Using AI to Alter Shorts Without Creator Consent

Some YouTube creators are calling foul after learning the platform has been quietly using AI to modify their Shorts without notification or approval.

For months, users have noticed odd visual quirks in some Shorts, such as skin that looks overly airbrushed, clothes appearing sharper than expected, or facial features slightly distorted.

The alterations are subtle enough to go unnoticed in isolation, but side-by-side comparisons have revealed inconsistencies that many say make their videos feel unnatural or artificial.

Musician and creator Rhett Shull spotlighted the issue in a video that has drawn over 700,000 views. Comparing his uploads across platforms, he pointed out that YouTube had seemingly softened and retouched his Shorts without permission.

“I did not consent to this,” said Shull. “Replacing or enhancing my work with some AI upscaling system not only erodes trust with the audience, but it also erodes my trust in YouTube.”

Keep reading

Thousands of Grok Chats Now Searchable on Google

Hundreds of thousands of conversations that users had with Elon Musk’s xAI chatbot Grok are easily accessible through Google Search, reports Forbes.

Whenever a Grok user clicks the “share” button on a conversation with the chatbot, it creates a unique URL that the user can use to share the conversation via email, text, or on social media. According to Forbes, those URLs are being indexed by search engines like Google, Bing, and DuckDuckGo, which in turn lets anyone look up those conversations on the web. 
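The fix on the platform side is well understood: pages served at share URLs can carry a `noindex` directive, which major crawlers honor, so the links still work for recipients but never surface in search results. A minimal sketch in Python (the function name, URL scheme, and page body are hypothetical illustrations, not xAI’s actual code; only the `X-Robots-Tag` header and `robots` meta tag semantics are standard):

```python
# Sketch of serving a shared-conversation page with directives that tell
# search engines not to index it. Everything here is illustrative except
# the standard noindex mechanisms themselves.

def share_page_response(conversation_html: str) -> tuple[str, dict]:
    """Build (body, headers) for a share URL that crawlers should skip."""
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        # HTTP-level directive honored by Google, Bing, and others:
        "X-Robots-Tag": "noindex, nofollow",
    }
    # Belt and braces: embed the same directive in the page markup.
    body = (
        "<!doctype html><html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        "</head><body>" + conversation_html + "</body></html>"
    )
    return body, headers

body, headers = share_page_response("<p>shared chat</p>")
```

Either mechanism alone would have kept these conversations out of Google, Bing, and DuckDuckGo while leaving the share links fully functional for anyone who receives one directly.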

Users of Meta’s and OpenAI’s chatbots were recently affected by a similar problem, and as in those cases, the chats leaked by Grok give us a glimpse into users’ less-than-respectable desires — questions about how to hack crypto wallets, dirty chats with an explicit AI persona, and requests for instructions on cooking meth.

xAI’s rules prohibit the use of its bot to “promote critically harming human life” or developing “bioweapons, chemical weapons, or weapons of mass destruction,” though that obviously hasn’t stopped users from asking Grok for help with such things anyway.

According to conversations made accessible by Google, Grok gave users instructions on making fentanyl, listed various suicide methods, handed out bomb construction tips, and even provided a detailed plan for the assassination of Elon Musk.

xAI did not immediately respond to a request for comment. We’ve also asked when Grok’s shared conversations began being indexed by search engines.

Late last month, ChatGPT users sounded the alarm that their chats were being indexed on Google, which OpenAI described as a “short-lived experiment.” In a post Musk quote-tweeted with the words “Grok ftw,” Grok explained that it had “no such sharing feature” and “prioritize[s] privacy.”

Keep reading

X Slams Brazil for Censorship, Secret Orders, and Free Speech Crackdown in USTR Trade Investigation

As part of an ongoing investigation by the US Trade Representative (USTR) into Brazil’s treatment of American digital platforms, X has filed a stark warning about what it describes as intensifying threats to freedom of expression and the rule of law in Brazil.

The USTR probe, focused on policies that may harm US companies, closed its comment period on August 18, with a hearing scheduled for September 3.

We obtained a copy of the comments for you here.

X’s submission outlines a series of aggressive measures by Brazilian authorities that the company says are undermining internet freedom and imposing extraterritorial censorship demands.

Among the most concerning developments, according to the platform, is a ruling from Brazil’s Supreme Court in June 2025 that gutted a core protection in the country’s 2014 internet law, the Marco Civil da Internet (MCI).

By declaring Article 19 partially unconstitutional, the ruling opened the door for tech platforms to be held legally responsible for user-generated content, without requiring judicial oversight.

This, X argues, has increased operational burdens and incentivized preemptive content removals.

The platform also warned that Brazil’s judiciary, particularly under Justice Alexandre de Moraes, has been issuing covert content removal orders targeting journalists, politicians, and even US users.

These directives are often enforced without any notice or opportunity to appeal, a practice X says raises serious concerns about due process and transparency.

Further, the company expressed alarm over Brazil’s Superior Court of Justice asserting jurisdiction beyond its borders. According to X, the court has ordered content to be removed globally, even when such content is legal in countries like the United States. The court has described this overreach as a “natural consequence” of the internet, a justification X contends disregards international legal norms.

X also highlighted what it sees as the Brazilian judiciary’s disregard for the US-Brazil Mutual Legal Assistance Treaty (MLAT).

Keep reading

UK Government Moves to Investigate 4chan Under Draconian ‘Online Safety Act’ — Platform’s SAVAGE Response Leaves Them Humiliated

The British government is at it again, weaponizing its so-called “Online Safety Act” to crack down on speech it doesn’t like.

This time, the target is none other than the online forum 4chan, the notorious online discussion board where anonymous users post unfiltered commentary that sends elites into fits.

On June 10, 2025, Ofcom, the UK’s Orwellian Office of Communications, opened an official investigation into 4chan.

According to Ofcom, the platform failed to hand over information on demand, did not file the “appropriate illegal content risk assessments,” and didn’t bow to London’s censorship mandates. In other words, 4chan refused to bend the knee.

By August 13, Ofcom escalated matters, issuing a provisional notice of contravention under the Act and threatening fines of £20,000 ($27,100) plus daily penalties until the platform complied.

According to the notice:

Provisional Decision: Information Notice duties

In accordance with section 130 of the Online Safety Act 2023, we have today issued 4chan Community Support LLC with a provisional notice of contravention.

Ofcom is satisfied that there are reasonable grounds for believing the provider has contravened its duties under section 102(8) of the Act to comply with two requests for information. We will consider any representations provided in response to this provisional notice before we make a final decision on this matter.

The additional duties under investigation

On 10 June 2025, we opened an investigation into whether the provider of 4chan has failed/is failing to comply with its duties under the Online Safety Act 2023 to:

  • adequately respond to a statutory information request;
  • complete and keep a record of a suitable and sufficient illegal content risk assessment; and
  • comply with the safety duties about illegal content.

Ofcom’s investigation continues to examine concurrently whether there are reasonable grounds to believe that the provider has failed, or is failing, to comply with the other duties under investigation, including duties to protect its users from illegal content. We will provide updates on these matters in due course.

But instead of cowering, 4chan and its legal team fired back with a blistering response that left Ofcom and Prime Minister Keir Starmer’s censors utterly humiliated.

In a scathing legal statement posted by Byrne & Storm, P.C. and Coleman Law, P.C., 4chan’s lawyers dismantled Ofcom’s fantasy that it had authority over an American company.

The statement went further, warning that U.S. federal authorities had already been briefed and that the Trump Administration should be prepared to step in to defend American companies against foreign censorship mandates.

The statement reads:

According to press reports, the U.K. Office of Communications (“Ofcom”) has issued a provisional notice under the Online Safety Act alleging a contravention by 4chan and indicating an intention to impose a penalty of £20,000, plus daily penalties thereafter.

4chan is a United States company, incorporated in Delaware, with no establishment, assets, or operations in the United Kingdom. Any attempt to impose or enforce a penalty against 4chan will be resisted in U.S. federal court.

American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an e-mail. Under settled principles of U.S. law, American courts will not enforce foreign penal fines or censorship codes.

If necessary, we will seek appropriate relief in U.S. federal court to confirm these principles.

United States federal authorities have been briefed on this matter.

The Prime Minister, Sir Keir Starmer, was reportedly warned by the White House to cease targeting Americans with U.K. censorship codes (according to reporting in the Telegraph on July 30th).

Despite these warnings, Ofcom continues its illegal campaign of harassment against American technology firms. A political solution to this matter is urgently required and that solution must come from the highest levels of American government.

We call on the Trump Administration to invoke all diplomatic and legal levers available to the United States to protect American companies from extraterritorial censorship mandates.

Our client reserves all rights.

Keep reading

Sinaloa Cartel Operator Turned Influencer Gunned Down in Mexico

A former Sinaloa Cartel operator turned social media influencer died Saturday evening after a lone gunman barged into his home and shot him multiple times. The influencer’s murder is believed to be tied to the ongoing turf war between rival factions of the Sinaloa Cartel, which has spilled onto social media and already led to the murder of at least six influencers.

On Saturday evening, at least one gunman forced his way into the home of Camilo “Pollo Loco” Ochoa in the state of Morelos in Central Mexico. The gunman fired several shots, killing Ochoa instantly, El Sol de Sinaloa reported. By the time authorities responded to the scene, the gunman had fled. Investigators found Ochoa’s bullet-riddled body in his bathroom, wearing the same clothes he had worn earlier in the day during one of his livestreams.

Ochoa had received various threats in the past. Earlier this year, Sinaloa Cartel gunmen posted a photograph of Ochoa alongside several other influencers supposedly aligned with the Chapitos faction. At least six influencers on that list have already been killed. The murder comes at a time when the Chapitos faction has been at war with the Mayiza faction for over a year.

Keep reading

Family horror: Relatives get told on social media their mother is dead, by ‘assisted suicide’

A family in Ireland is reeling from the shock of learning, via social media, that their mother had died by “assisted suicide” in a situation about which they knew nothing.

According to the Christian Institute, Maureen Slough, 58, from County Cavan, left home after telling family members she was going on holiday to Lithuania.

Instead, she went to the death-dealing Pegasos clinic in Switzerland.

Her family was notified shortly afterward, via the messaging platform WhatsApp, that she was dead.

The institute reported, “She had struggled with mental health and previously tried to commit suicide after the death of her two sisters.”

Pegasos officials claimed her daughter, Megan Royal, had confirmed her mother’s intentions, but Megan said the “acknowledgment letter” was probably forged.

The institute quoted columnist David Quinn, writing in the Sunday Independent: “It is a sad part of the human condition that nearly all of us will face very significant struggles at various points in our lives. Maureen Slough clearly did, but she had a loving daughter willing to help her.”

Keep reading