Substackers Battle Over Banning Nazis

Once again, we’re debating “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see certain offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong.

This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack—everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again—has already come under fire multiple times for its moderation policies (or lack thereof).

Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s also similar to predecessors like WordPress and Blogger in key ways—and more similar to such platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.

Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On the likes of Facebook, X, and other social media platforms, there are tons of rules about what kinds of things you are and aren’t allowed to post and elaborate systems for reporting and moderating possibly verboten content. 

Substack has some rules, but they’re pretty broad—nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).

Substack’s somewhat more laissez-faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Keep reading

Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t result in the removal of content, it can result in disfavored narratives being suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading

Democrats Demand Social Media Platforms Censor Abortion “Misinformation” With Direct Letters to Musk and Zuckerberg

House Democrats have issued a strong call to Elon Musk and Mark Zuckerberg, urging them to address what they call the widespread issue of abortion “misinformation” on their social media platforms, including Facebook, Instagram, and X.

These concerns were expressed in two letters from the House Oversight Committee.

We obtained a copy of the letter for you here.

The letters, spearheaded by Jamie Raskin (D-Md.), the committee’s Ranking Member, request that Musk’s and Zuckerberg’s platforms urgently tackle the spread of false abortion information and provide Congress with briefings on this matter by December 14.

“Your company’s decision to keep these posts visible seems at odds with your Terms of Service that allow you to remove unlawful conduct on your platform,” the letter states. “Even more concerning is your company’s apparent double-standard when it comes to removing posts that you label ‘abortion advocacy’ or posts that offer legitimate medical and logistical advice for someone considering abortion, while allowing crisis pregnancy centers and anti-abortion advocates to spread false and misleading information regarding abortion.”

The committee, in its letters, highlights the nature of the allegedly misleading medical information and false content about abortion that is proliferating, especially on the platforms managed by Musk and Zuckerberg. This alleged misinformation, according to the committee, can lead to people doubting their healthcare providers and even their own judgment, posing significant health and safety risks.

Keep reading

Democrats Blasted for Claiming “No Evidence” of Big Tech-Government Censorship Collusion

Republicans called out Democrats for continuing to deny that the Biden administration colluded with tech platforms to censor speech during a hearing today, despite lawsuits, subpoenas, and other releases uncovering huge troves of evidence documenting the Biden administration’s relentless censorship demands.

Democrats claimed there’s “no evidence” of censorship collusion, branded the notion that social media companies are colluding with the government to censor conservative voices as “unfounded,” and called it a “conspiracy” theory during a House Judiciary Committee Hearing on the Weaponization of the Federal Government.

But Republicans shot back and called them out for ignoring the huge banks of evidence that showcase Biden admin officials leaning on Big Tech to censor speech they disapprove of.

Three of the witnesses, journalist Matt Taibbi, journalist Michael Shellenberger, and journalist Rupa Subramanya, also challenged Democrat attempts to dismiss evidence of Biden admin-Big Tech censorship collusion during the hearing.

Rep. Stacey Plaskett (D-VI) claimed there’s “no evidence” of tech companies colluding with the government to censor conservatives.

Keep reading

Instagram Videos Sexualizing Children Shown To Adults Who Follow Preteen Influencers

In June, we noted that Meta’s Instagram was caught facilitating a massive pedophile network, by which the service promoted pedophilic content to pedophiles using coded emojis, such as a slice of cheese pizza.

According to the Wall Street Journal, Instagram allowed pedophiles to search for content with explicit hashtags such as #pedowhore and #preteensex, which were then used to connect them to accounts that advertise child-sex material for sale from users going under names such as “little slut for you.” And according to the National Center for Missing & Exploited Children, Meta accounted for more than 85% of child pornography reports, the Journal reported.

Companies whose ads appeared next to inappropriate content included Disney, Walmart, Match.com, Hims, and the Wall Street Journal itself.

And yet, no mass exodus of advertisers…

They haven’t stopped…

According to a new report by the Journal, Instagram’s ‘reels’ service – which shows users short video clips the company’s algorithm thinks they will find interesting – has been serving up clips of sexualized children to adults who follow preteen influencers, gymnasts, cheerleaders and other categories that child predators are attracted to.

The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts that followed those children had also demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.

In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl. -WSJ

A separate experiment run by the Canadian Centre for Child Protection produced similar results.

Keep reading

Disinformation expert laments loss of power over speech on social media leading up to 2024 elections

A disinformation expert is lamenting that social media platforms have less control over speech as the 2024 elections approach, while conservatives notch wins against the industry and the Department of Homeland Security’s calls for greater censorship.

This comes as the House Select Subcommittee on the Weaponization of the Federal Government released yet more evidence this week on the nexus of the federal government, universities, and Big Tech that worked to censor Americans during the 2020 election. The House Judiciary Committee also held a hearing on the Department of Homeland Security’s efforts to increase censorship through the Election Integrity Partnership.

The new report by the Foundation for Freedom Online (FFO) shows how former Big Tech employees and censorship experts are lamenting their shrinking influence as the presidential election approaches next year. Pressure from a Republican House and some journalists discourages the federal government from collaborating with the Big Tech companies as it did during the 2020 election. Some federal courts have weighed in, finding the collaboration unconstitutional.

“Yoel Roth has been on a public speaking tear, sounding alarms to fellow censorship industry insiders that they’ve lost the control over 2024 election speech they once had in 2020,” Mike Benz, executive director of FFO, posted to X on Monday.

Keep reading

Microsoft and Meta Detail Plans To Combat “Election Disinformation” Which Includes Meme Stamp-Style Watermarks and Reliance on “Fact Checkers”

And so it begins. In fact, it hardly ever stops – another election cycle is well on its way in the US. But what has emerged these last few years, and what continues to crop up the closer election day gets, is the role of the most influential social platforms/tech companies.

Pressure on them is sometimes public, but mostly not, as the Twitter Files have taught us; and it is with this in mind that various announcements about combating “election disinformation” coming from Big Tech should be viewed.

Although, one can never discount the possibility that some – say, Microsoft – are doing it quite voluntarily. That company has now come out with what it calls “new steps to protect elections,” and is framing this concern for election integrity more broadly than just the goings-on in the US.

From the EU to India and many, many places in between, elections will be held over the next year or so, says Microsoft; however, these democratic processes are in peril.

“While voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests,” said a blog post co-authored by Microsoft Vice Chair and President Brad Smith.

By “another force,” could Smith possibly mean Big Tech? No. It’s “multiple authoritarian nation states” he’s talking about, and Microsoft’s “Election Protection Commitments” seek to counter that threat with a 5-step plan to be deployed in the US and in other places where “critical” elections are to be held.

Why some elections are more “critical” than others, and what exactly Microsoft is seeking to protect – it’s all very unclear.

Keep reading

Debunking the Myth of “Anonymous” Data

Today, almost everything about our lives is digitally recorded and stored somewhere. Each credit card purchase, personal medical diagnosis, and preference about music and books is recorded and then used to predict what we like and dislike, and—ultimately—who we are.

This often happens without our knowledge or consent. Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them.

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person’s name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy.

Sometimes companies say our personal data is “anonymized,” implying a one-way ratchet where it can never be dis-aggregated and re-identified. But this is not possible—anonymous data rarely stays that way. As Professor Matt Blaze, an expert in the field of cryptography and data privacy, succinctly summarized: “something that seems anonymous, more often than not, is not anonymous, even if it’s designed with the best intentions.”
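The failure mode described above can be illustrated with a toy “linkage attack”: joining a supposedly de-identified dataset with a public, named dataset on shared quasi-identifiers (ZIP code, birth year, sex). Everything below—the names, records, and datasets—is hypothetical, and a real attack would use far larger datasets, but the mechanics are the same.

```python
# Hypothetical "de-identified" purchase records: names removed,
# but quasi-identifiers (zip, birth_year, sex) left intact.
deidentified_purchases = [
    {"zip": "60614", "birth_year": 1980, "sex": "F", "purchase": "medication A"},
    {"zip": "73301", "birth_year": 1975, "sex": "M", "purchase": "medication B"},
]

# Hypothetical public dataset (e.g., a voter roll) that pairs the
# same quasi-identifiers with real names.
public_voter_roll = [
    {"name": "Jane Doe", "zip": "60614", "birth_year": 1980, "sex": "F"},
    {"name": "John Roe", "zip": "73301", "birth_year": 1975, "sex": "M"},
]

def reidentify(anonymous_rows, identified_rows, keys=("zip", "birth_year", "sex")):
    """Re-identify rows whose quasi-identifiers match exactly one named person."""
    matches = []
    for anon in anonymous_rows:
        candidates = [
            person for person in identified_rows
            if all(person[k] == anon[k] for k in keys)
        ]
        if len(candidates) == 1:  # a unique match ties the record to a name
            matches.append((candidates[0]["name"], anon["purchase"]))
    return matches

# Each "anonymous" purchase is now tied back to a named individual.
print(reidentify(deidentified_purchases, public_voter_roll))
```

When the combination of quasi-identifiers is unique across both datasets—as it often is in practice—stripping the name column alone provides no real anonymity.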

Keep reading

A cautionary tale about Wikipedia censorship and the Twitter Files

For the illiberal left, it’s not enough that you submit to their cultural revolution. You must also underwrite it.

This happens not only at the state level, with issues such as abortion and public-school curricula, but at the private level as well. A good recent example includes efforts by certain Wikipedia editors to censor mentions of a journalism award handed out recently to the journalist behind the so-called Twitter Files.

Wikipedia: glad-handing for donations on the front end, while certain “master editors” censor factual events on the back end!

On November 1, journalist Matt Taibbi received a journalism award for his efforts to uncover the incestuous relationship between Big Tech and censorious federal apparatchiks. More specifically, for his part in casting light on the government’s clandestine coordination with Twitter to censor inconvenient speech, the National Journalism Center, where I serve as program director, and the Dao Feng and Angela Foundation awarded Taibbi and his colleagues — former New York Times op-ed staff editor Bari Weiss and author Michael Shellenberger — a shared prize of $100,000 for excellence in investigative journalism.

In accepting the honor, Taibbi reiterated the award’s purpose: to recognize journalism that challenges power rather than protects it.

Keep reading