Media Outlets Are Already Calling for Online 2024 Election Censorship

The page has barely been turned on 2023, and already the narrative that policing online speech will be vital for 2024, an election year, has begun to stir.

The legacy media outlet The Guardian, in its piece about Kate Starbird, has already complained that there may be less censorship ahead of the 2024 elections, and claimed that Rep. Jim Jordan’s committee’s reports on Big Tech-government censorship collusion are based on “outlandish claims.” This ignores the fact that an injunction was successfully placed on the Biden administration over its censorship pressure on Big Tech, in a case the Supreme Court will rule on this year.

In an era where the policing of online speech is increasingly contentious, Kate Starbird’s role in combating what she terms election misinformation has placed her squarely in the midst of a heated debate. As a leading figure at the University of Washington’s Center for an Informed Public, Starbird has actively engaged in documenting what she and her team perceive as misinformation during the 2020 elections, particularly focusing on claims of voter fraud.

However, Starbird’s approach and her team’s actions have not been without controversy. Critics argue that their efforts amount to a form of censorship, infringing upon free speech. This criticism extends beyond Starbird’s team to a broader national trend, where researchers engaged in similar work face accusations of partisanship and censorship, challenging the principles of free expression.

Jim Jordan, chair of the House judiciary committee, has emerged as a key figure in opposing what he views as the overreach of these researchers. He has focused on investigating groups and individuals involved in counteracting misinformation, especially in the context of elections and Covid-19. Central to the controversy is the practice of working with government entities and flagging content to social media platforms, which some argue leads to undue censorship and violates First Amendment rights.

The debate over the role of anti-misinformation efforts has escalated beyond Congress, evidenced by lawsuits from the attorneys general of Missouri and Louisiana and from the state of Texas, along with two rightwing media companies. These legal actions challenge the alleged collaboration between the Biden administration, the Global Engagement Center, and social media companies, framing it as a constitutional breach.

Keep reading

Biden Complains About Provision That Bans Pentagon From Contracting With Censorship Groups, “Fact-Checkers”

There are few things as jarring as a sitting US administration invoking the First Amendment (constitutional free speech protections) – while its purpose, to all intents and purposes, seems to be to undermine those very protections.

In such cases, the hypocrisy doesn’t simply whisper. Here, it screams. And there have been many such instances over the years.

This is a new example: the Biden administration late last week approved the National Defense Authorization Act (NDAA) for the upcoming year.

One provision – notable for an “authoritative democracy” – was that the US Defense Department would not be allowed to contract with certain groups, such as the by-now-infamous NewsGuard and the free-speech-trampling Global Disinformation Index (GDI) – outfits effectively out there working hard to silence opposition-leaning press in the US.

But then, as soon as the 2024 NDAA was signed by Biden late last week, the somewhat erratic president – or whoever is… advising him – pushed a different story to the public.

“While I am pleased to support the critical objectives of the NDAA, I note that certain provisions of the Act raise concerns,” reads a subsequent statement, signed by Biden.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, it at first looks as if the company blames its own existence for the proliferation of “misinformation” – the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types – “supervised” and “unsupervised.” The latter works by providing an algorithm with huge datasets (such as images or, in this case, language) and asking it to “learn” to identify what it is “looking” at.

(Reinforcement learning is a part of the process – in essence, the algorithm gets trained to become increasingly efficient in detecting whatever those who create the system are looking for.)
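To make the “supervised” variety concrete, here is a minimal sketch of how a text “prediction model” of this general kind works. It is purely illustrative – the labels, training examples, and scoring are invented for this example and bear no relation to Google’s actual patent, data, or models. It trains a tiny bag-of-words Naive Bayes classifier on labeled text and then “predicts” a label for a new message:

```python
import math
from collections import Counter, defaultdict

def train(labeled_docs):
    """Supervised training: count word frequencies per label."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    for text, label in labeled_docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(model, text):
    """Score each label with Laplace-smoothed log-probabilities."""
    word_counts, label_counts = model
    total_docs = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Add-one smoothing avoids log(0) for unseen words
            count = word_counts[label][word] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy labeled data -- entirely invented for illustration
training_data = [
    ("breaking shocking secret they hide truth", "flagged"),
    ("shocking truth exposed share now", "flagged"),
    ("city council approves new budget", "ok"),
    ("local team wins weekend match", "ok"),
]
model = train(training_data)
print(predict(model, "shocking secret exposed"))  # -> "flagged"
```

The point of the sketch is that the model’s output is entirely a product of who labeled the training data and how – which is exactly why critics worry about what a platform operator decides counts as “misinformation” in the first place.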

The ultimate goal here would highly likely be for Google to make its “misinformation detection” – i.e., censorship – more efficient while targeting a specific type of data.

Keep reading

Biden Administration Urges Supreme Court To Overturn Injunction on Federal Agencies Influencing Tech Censorship

The US Court of Appeals for the Fifth Circuit recently affirmed an injunction against federal agencies, meant to stop the current White House from colluding with Big Tech’s social media platforms.

And now, the Biden Administration is going to the US Supreme Court in a last-ditch attempt to reverse this decision.

The big picture effect – or at least, the intended meaning – of the Fifth Circuit ruling was to stop the government from working with Big Tech in censoring online content.

There’s little surprise that this doesn’t sit well with that government, which now hopes that the federal appellate court’s decision can be overturned.

The White House says the ruling is banning its “good” work done alongside social media to combat “misinformation.” Instead of admitting that its actions amount to collusion with Big Tech – which has by now been amply documented, not least by the Twitter Files – the government insists its actions serve the public, and its “ability” to discuss relevant issues.

We obtained a copy of the petition for you here.

US Surgeon General Vivek Murthy is back again here – to argue that what those now in power in the US did ahead of the 2020 presidential election, as well as subsequently regarding pandemic “misinformation” (a message amplified by legacy media), is justified – even though that activity is now fairly widely accepted to have been censorship (“moderation”).

Keep reading

Substackers Battle Over Banning Nazis

Once again, we’re debating about “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see some offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong. 

This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack—everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again—has already come under fire multiple times for its moderation policies (or lack thereof).

Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s similar to predecessors like WordPress and Blogger in some key ways, also—and more similar to such platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.

Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On the likes of Facebook, X, and other social media platforms, there are tons of rules about what kinds of things you are and aren’t allowed to post and elaborate systems for reporting and moderating possibly verboten content. 

Substack has some rules, but they’re pretty broad—nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).

Substack’s somewhat more laissez faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Keep reading

Meta: Systemic Censorship of Palestine Content

Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Keep reading

Google Experiments With “Faster and More Adaptable” Censorship of “Harmful” Content Ahead of 2024 US Elections

In the run-up to the 2020 US presidential election, Big Tech engaged in unprecedented levels of election censorship, most notably by censoring the New York Post’s bombshell Hunter Biden laptop story just a few weeks before voters went to the polls.

And with the 2024 US presidential election less than a year away, both Google and its video sharing platform, YouTube, have confirmed that they plan to censor content they deem to be “harmful” in the run-up to the election.

In its announcement, Google noted that it already censors content that it deems to be “manipulated media” or “hate and harassment” — two broad, subjective terms that have been used by tech giants to justify mass censorship.

However, ahead of 2024, the tech giant has started using large language models (LLMs) to experiment with “building faster and more adaptable” censorship systems that will allow it to “take action even more quickly when new threats emerge.”

Google will also be censoring election-related responses in Bard (its generative AI chatbot) and Search Generative Experience (its generative AI search results).

In addition to these censorship measures, Google will be continuing its long-standing practice of artificially boosting content that it deems to be “authoritative” in Google Search and Google News. While this tactic doesn’t result in the removal of content, it can result in disfavored narratives being suppressed and drowned out by these so-called authoritative sources, which are mostly pre-selected legacy media outlets.

Keep reading

Washington Post Op-Ed Argues That Colleges Should ‘Restrict’ Speech To Fight Antisemitism

Since the start of the Israel-Hamas war, college campuses around the country have been embroiled in intense anti-Israel protests. Elite college campuses have seen particularly aggressive demonstrations that have frequently included outright support for Hamas.

On December 5th, the college presidents of Harvard, the University of Pennsylvania, and the Massachusetts Institute of Technology (MIT) appeared at a Congressional hearing, where they were grilled on their schools’ response to allegations of campus anti-Semitism. During the hearing, Rep. Elise Stefanik (R-NY), asked all three if “calling for the genocide of Jews” would violate their school’s policies. 

“It is a context-dependent situation,” University of Pennsylvania President Liz Magill responded. “If the speech becomes conduct, it can be harassment.”

Outrage over Magill’s answer—both from those who wished to see her commit to banning legal but offensive anti-Semitic speech and from those who pointed out Penn’s consistent record of punishing professors for much less offensive expression—culminated in her resignation on Saturday.

While First Amendment advocates have expressed hope that these recent controversies would show just how easily anti-“hate speech” rules on college campuses can be abused, many administrators seem to be taking the opposite position, advocating for more censorship, not less.

On Sunday, Claire O. Finkelstein, who is a member of Penn’s Open Expression Committee and chairs the law school’s committee on academic freedom, took to the pages of The Washington Post in an article titled “To fight antisemitism on campuses, we must restrict speech.”

In it, Finkelstein farcically argued that “the value of free speech has been elevated to a near-sacred level on university campuses,” adding that, “as a result, universities have had to tolerate hate speech.”

The idea that free speech is treated as “near-sacred” on college campuses is beyond absurd. Far from being treated as sacrosanct, free speech and free expression are constantly under fire at American college campuses, elite colleges most of all. 

As the Foundation for Individual Rights and Expression (FIRE) CEO Greg Lukianoff points out, over the past decade, “we know of more than 1,000 campaigns to get professors punished for their free speech or academic freedom. Of those, about two-thirds succeeded in getting the professor punished.” 

The most disturbing detail? Lukianoff says that almost 200 of these professors were fired, “nearly twice the number estimated for the Red Scare.”

Keep reading

China’s online censors target short videos, artificial intelligence and ‘pessimism’ in latest crackdown

China’s internet censors are targeting short videos that spread “misleading content” as part of its latest online crackdown.

The Cyberspace Administration of China said on Tuesday that it would target short videos that spread rumours about people’s lives or promoted incorrect values such as pessimism – included for the first time – and extremism.

The campaign would also target fake videos generated using artificial intelligence, the watchdog said.

The country’s top censorship body has been running an annual online crackdown known as “Qing Lang”, which means clear and bright, since 2020.

It said this year’s crackdown would benefit people’s mental health and create a healthy space for competition that would help the short video industry develop.

The country’s best known short video platform is Douyin – the Chinese sibling of TikTok – but content is shared on a number of other Chinese social media platforms, including major players such as WeChat and Weibo.

The watchdog said one of the targets of the latest campaign would be content producers who make up stories about social minorities to win public sympathy. It would also crack down on people staging incidents, “making up fake plots and spreading panic”.

Keep reading

WEF Likens “Misinformation” To A Cybersecurity Issue In Calls For More Action

According to a recent study by the World Economic Forum (WEF) and allied organizations, cybersecurity concerns are taking on new dimensions. Misinformation and disinformation disseminated via the internet are now being framed as key challenges to “cybersecurity.” The troubling report, titled “Cybersecurity Futures 2030: New Foundations,” was launched on December 5.

The study postulates that the future of cybersecurity lies in safeguarding the integrity and source of data. This introduces a novel perspective: locating and quashing fabricated information – cynically tagged as “mis”- or “dis-information” – is now held to fall within the cybersecurity domain.

Various international conferences, both virtual and in-person, were instrumental in shaping the insights of the study. Sessions held across the world, in conjunction with an online gathering inviting participants from across Europe, supposedly served as catalysts in outlining the hypothetical future scenarios that project cybersecurity forward to 2030.

The WEF report pushes digital security “literacy training” as quintessential to warding off the threats posed by misinformation and disinformation, referring to them as the “core of cyber concerns.” This is similar to controversial proposals for “media literacy” that are taking place across some governments, most recently California.

Keep reading