Supreme Court Declines To Hear X’s Challenge to FBI Surveillance Gag Orders

The social network formerly known as Twitter has lately undergone more than a “superficial” rebranding, going from a reliable ally of state-driven censorship to the first major platform to try to shed light on the mechanisms and practices of deep censorship.

The Twitter Files disclosed more than just a private company exercising the right to be wrong in suppressing users’ free speech: they also implicated the US federal government, with damning evidence of serious transgressions such as (explicitly unconstitutional) state collusion in censorship.

However, the US Supreme Court has now refused to consider X’s request to be allowed to publish some of the relevant numbers.

The original filing dates all the way back to 2014, in the wake of the revelations by whistleblower Edward Snowden, which sent shock waves through citizens and politicians alike.

But those behind the company/platform, now called X, seem well aware that this story by no means ended with the government’s post-Snowden concessions on disclosure, or with the Twitter Files.

And so, possibly as a defense tactic going forward, X sought the right to reveal the number of times federal law enforcement “gets in touch” to obtain information, framed as pertaining to national security.

The Supreme Court decision came after X appealed a lower court’s ruling that the FBI had every right to bar X from sharing the number of “national security investigation requests” with the public.

Keep reading

The Covid vaccine gave me side effects that ruined my life, but Facebook keeps censoring me from telling my friends about what happened

A woman who says she suffered chronic health complications after taking the AstraZeneca vaccine claims to have been censored from sharing her story on Facebook.

Caroline Pover, 52, received the jab in March 2021 and, within nine hours, experienced convulsions, shivering, breathing difficulties and low blood pressure.

Ms Pover, of Cirencester, Gloucestershire, says she was hospitalised when her condition escalated to ‘stroke-like’ symptoms, in addition to exhaustion, breathing difficulties, a racing heart and migraines.

Her story was shared in a national newspaper in March last year as she and 800 other victims struggled to claim compensation under the Government’s Vaccine Damage Payment Scheme (VDPS).

But after sharing the link on her Facebook feed at the start of this year, Ms Pover says the platform put a warning notice on her account.

Ms Pover, herself a freelance journalist and entrepreneur, said: ‘My posts about what was happening to me started having FB “notes” appearing underneath them about vaccination.

‘A group page I was an admin on was shut down completely by Facebook in the summer of 2021.

‘When I posted the Daily Express article, which did an excellent job of not discussing anything pro or anti… I received a warning and the post was hidden.

‘It’s a ridiculous situation for vaccine-injured people, who have a right to information.

‘If this was an online support group relating to cancer or another type of serious condition, we’d be outraged at the thought of it being censored and we’d be very sensitive to people having to navigate a very complicated health situation.’

Keep reading

The American Psychological Association Wants (More) Federal Funding To Curb Online “Misinformation”

The American Psychological Association (APA) is among the organizations enlisted in the “war on misinformation” back in 2021, when it accepted a $2 million grant from the Centers for Disease Control and Prevention (CDC) to help push the Covid narratives of the time.

APA’s particular task there was to come up with “a scientific consensus statement on the science of misinformation.”

Now, APA is clamoring for even more federal money, declaring psychology to be “leading the way on fighting misinformation” and advertising psychologists as the right people both to research the problem (as it has been framed in recent years) and to be “part of the solution.”

An article on APA’s site doesn’t shy away from terminology that spreads a sense of alarm, such as “the scourge of misinformation,” or from asserting that clinicians now have to treat patients “subsumed” by conspiracy theories, while institutions and communities allegedly suffer unspecified “harm.”

And APA also doesn’t shy away from mentioning the US presidential election, or from positioning that event as something that makes combating misinformation “messier and more important than ever.”

Messy it is, alright. To position itself properly among all those vying for funding and influence by exaggerating the threat of misinformation as a new phenomenon, APA actually states that, with the election in mind, fighting misinformation is “one of the top trends facing the field (psychology) in 2024.”

Really, APA? Perhaps the author meant a top trend faced by the organization itself, since it has had to show something in return for the $2 million CDC grant it received in 2021 to research “the science of stopping misinformation.”

(Spoiler: that “science” is already well-developed and applied; it’s called censorship.)

Keep reading

Several Large Accounts That Criticized Israel and Musk Banned from X

A group of journalists and political commentators with large followings were banned on X without notice. The owners of the banned profiles pointed to criticisms of Israel and Elon Musk. 

On Tuesday morning, the accounts of Alan MacLeod, Ken Klippenstein, Rob Rousseau, the True Anon Podcast, Steven Monacelli, and an anonymous account, @Zei_Squirrel, were all banned on the platform owned by Musk.

MacLeod, who had over 200,000 followers on X, posted on Telegram that the suspension came without warning. “Today, without warning or explanation, Twitter suspended my account, @AlanRMacLeod. They told me to check my email for a reason, but no email has been forthcoming,” the journalist wrote. “I have never even remotely been involved in any controversy/been reported/been stuck in Twitter jail before, so I assume the real reason is political, especially as high-profile leftist accounts like Rob Rousseau and Zei_Squirrel were also targeted today.”

In a statement provided to The Libertarian Institute, MacLeod said, “I’m deeply concerned about Twitter banning a host of influential anti-war accounts today, including my own. It is a sign that Elon Musk’s supposed passion for free speech might not be all that it seems.”

Musk acquired Twitter in October of 2022 and later renamed the platform X. At the time, he said he aimed to make Twitter a “platform for free speech around the globe.” 

In one of his first acts as owner of Twitter, he allowed Matt Taibbi and other journalists to access the business communications of the company’s leadership. In the Twitter Files, Taibbi exposed a coordinated effort between the government and Twitter to censor speech that countered the establishment narrative on the election, covid, and the war in Ukraine. 

However, as America has entered an election year, Musk-owned X is stepping up its censorship efforts. 

Keep reading

The Big Flaws in That Study Suggesting That China Manipulates TikTok Topics

The latest wave of fearmongering about TikTok involves a study purportedly showing that the app suppresses content unflattering to China. The study attracted a lot of coverage in the American media, with some declaring it all the more reason to ban the video-sharing app.

“Hopefully members of Congress will take a look at this report and maybe bring the authors to Washington to give testimony about their findings,” wrote John Sexton at Hot Air. The study “suggests that the next generation will have had a significant portion of their news content spoon fed to them by a communist dictatorship,” fretted Leon Wolf at Blaze Media. “TikTok suppression study is another reason to ban the app,” declared a Washington Examiner editorial.

But there are serious flaws in the study design that undermine its conclusions and any panicky takeaways from them.

In the study, the Network Contagion Research Institute (NCRI) compared the use of specific hashtags on Instagram (owned by the U.S. company Meta) and on TikTok (owned by the Chinese company ByteDance). The analysis included hashtags related both to general subjects and to “China sensitive topics” such as Uyghurs, Tibet, and Tiananmen Square. “While ratios for non-sensitive topics (e.g., general political and pop-culture) generally followed user ratios (~2:1), ratios for topics sensitive to the Chinese Government were much higher (>10:1),” states the report, titled “A Tik-Tok-ing Timebomb: How TikTok’s Global Platform Anomalies Align with the Chinese Communist Party’s Geostrategic Objectives.”
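
To make that methodology concrete, here is a minimal sketch of the kind of ratio comparison the report describes. The post counts below are invented purely for illustration (the real figures are in the NCRI report); the idea is to check each hashtag’s Instagram-to-TikTok post ratio against the rough ~2:1 user-base baseline.

```python
# Minimal sketch of the NCRI-style ratio comparison, using INVENTED
# post counts purely for illustration -- not data from the report.
# hashtag: (instagram_posts, tiktok_posts)
post_counts = {
    "#popculture": (2_000_000, 1_000_000),  # non-sensitive, ~2:1
    "#election":   (1_800_000,   900_000),  # non-sensitive, ~2:1
    "#uyghurs":    (  550_000,    40_000),  # "China-sensitive", >10:1
    "#tiananmen":  (  300_000,    20_000),  # "China-sensitive", >10:1
}

USER_BASELINE = 2.0   # assumed ~2:1 Instagram:TikTok user ratio
ANOMALY_FACTOR = 5.0  # flag ratios far above baseline (i.e., >10:1)

for tag, (instagram, tiktok) in post_counts.items():
    ratio = instagram / tiktok
    status = ("anomalous" if ratio > USER_BASELINE * ANOMALY_FACTOR
              else "tracks user base")
    print(f"{tag}: {ratio:.1f}:1 -> {status}")
```

Note that a computation like this only demonstrates a disparity, not its cause, which is exactly where the critiques below come in.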

The study concludes that there is “a strong possibility that TikTok systematically promotes or demotes content on the basis of whether it is aligned with or opposed to the interests of the Chinese Government.”

There are ample reasons to be skeptical of this conclusion. Paul Matzko pointed out some of these in a recent Cato Institute blog post, identifying “two remarkably basic errors that call into question the fundamental utility of the report.”

The errors are so glaring that it’s hard not to suspect an underlying agenda at work here.

Keep reading

Media Outlets Are Already Calling for Online 2024 Election Censorship

The page has only just been turned on 2023, and already the narrative that heavy policing of online speech will be vital for 2024, an election year, is stirring.

The legacy media outlet The Guardian, in its piece about Kate Starbird, has already complained that there may be less censorship ahead of the 2024 elections, and claimed that Rep. Jim Jordan’s committee’s reports on Big Tech-government censorship collusion are based on “outlandish claims.” This ignores the fact that an injunction was successfully placed on the Biden administration for its censorship pressure on Big Tech, in a case the Supreme Court will rule on this year.

In an era where the policing of online speech is increasingly contentious, Kate Starbird’s role in combating what she terms election misinformation has placed her squarely in the midst of a heated debate. As a leading figure at the University of Washington’s Center for an Informed Public, Starbird has actively engaged in documenting what she and her team perceive as misinformation during the 2020 elections, particularly focusing on claims of voter fraud.

However, Starbird’s approach and her team’s actions have not been without controversy. Critics argue that their efforts amount to a form of censorship, infringing upon free speech. This criticism extends beyond Starbird’s team to a broader national trend, where researchers engaged in similar work face accusations of partisanship and censorship, challenging the principles of free expression.

Jim Jordan, chair of the House judiciary committee, has emerged as a key figure in opposing what he views as the overreach of these researchers. He has focused on investigating groups and individuals involved in counteracting misinformation, especially in the context of elections and Covid-19. Central to the controversy is the practice of working with government entities and flagging content to social media platforms, which some argue leads to undue censorship and violates First Amendment rights.

The debate over the role of anti-misinformation efforts has escalated beyond Congress, as evidenced by lawsuits from the attorneys general of Missouri and Louisiana and from the state of Texas, along with two rightwing media companies. These legal actions challenge the alleged collaboration between the Biden administration, the Global Engagement Center, and social media companies, framing it as a constitutional breach.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, the company at first appears to blame its own industry for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types: “supervised,” where the algorithm learns from examples labeled by humans, and “unsupervised,” where it is given huge unlabeled datasets (such as images or, in this case, language) and left to “learn” on its own to identify what it is “looking” at.

(Reinforcement learning can also play a part: in essence, the algorithm is rewarded for becoming increasingly efficient at detecting whatever those who build the system are looking for.)
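
To make the “supervised” case concrete, here is a minimal, hypothetical sketch of how such a prediction model is typically trained on labeled text. This is illustrative only: the patent filing does not disclose Google’s actual model, features, or training data, and the toy posts and labels below are invented.

```python
# Hypothetical sketch of supervised "misinformation" classification.
# Not Google's patented system -- just the generic train-then-predict
# pattern the filing describes, using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labeled examples: 1 = labeled "misinformation", 0 = not.
posts = [
    "Miracle cure that doctors don't want you to know about",
    "Share this before they delete the leaked secret memo",
    "City council approves new budget for road repairs",
    "Local team wins the championship in overtime",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each post into a weighted word-frequency vector;
# logistic regression then learns a decision boundary over them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The trained model "predicts" a label for unseen text.
print(model.predict(["Secret cure they are hiding from you"]))
```

The key point for the argument here: whoever supplies the labels decides what the model learns to flag.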

The ultimate goal here is most likely for Google to make its “misinformation detection,” i.e., censorship, more efficient while targeting a specific type of data.

Keep reading

Substackers Battle Over Banning Nazis

Once again, we’re debating “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see certain offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong.

This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack—everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again—has already come under fire multiple times for its moderation policies (or lack thereof).

Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s similar to predecessors like WordPress and Blogger in some key ways, also—and more similar to such platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.

Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On the likes of Facebook, X, and other social media platforms, there are tons of rules about what kinds of things you are and aren’t allowed to post and elaborate systems for reporting and moderating possibly verboten content. 

Substack has some rules, but they’re pretty broad—nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).

Substack’s somewhat more laissez-faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Keep reading

EU Hits Musk With X Probe On Possible ‘Disinformation’

European Commissioner Thierry Breton announced an investigation into Elon Musk’s ‘free speech’ social media platform X for failure to combat ‘illicit content and disinformation.’ It is the first major probe the EU has opened into X since last year’s passage of a new law, the Digital Services Act.

“Today we open formal infringement proceedings against @X” under the Digital Services Act, European Commissioner Breton wrote in a post on Monday morning on X. 

“The Commission will now investigate X’s systems and policies related to certain suspected infringements,” spokesman Johannes Bahrke told reporters in Brussels, adding, “It does not prejudge the outcome of the investigation.”

The investigation is centered on whether X failed to stop the spread of ‘illegal content’ (in other words, non-approved government narratives) and whether the Community Notes feature is enough to combat “information manipulation.” 

An investigation into X’s business practices was signaled by the EU as early as October, following the Israel-Gaza conflict in which officials warned that “terrorist and violent content and hate speech” was spreading on the social media platform. 

“The time of big online platforms behaving like they are ‘too big to care’ has come to an end,” Breton stated. 

Keep reading

Deepfake Society: 74% of Americans Can’t Tell What’s Real or Fake Online Anymore

Americans believe only 37 percent of the content they see on social media is “real,” or free of edits, filters, and Photoshop. Between AI and “deepfake” videos, a survey of 2,000 adults split evenly by generation reveals that almost three-quarters (74%) can’t even tell what’s real or fake anymore.

Americans are wary of both targeted ads (14%) and influencer content (18%), but a little more than half (52%) find themselves equally likely to question the legitimacy of either one. This goes beyond social media and what’s happening online. The survey finds that while 41 percent have more difficulty determining if an item they’re looking to purchase online is “real” or “a dupe,” another 36 percent find shopping in person to be just as challenging.

While the average respondent will spend about 15 minutes determining if an item is “real,” meaning a genuine model or a knockoff, millennials take it a step further and will spend upwards of 20 minutes trying to decide.

Conducted by OnePoll on behalf of De Beers Group, the survey reveals that Americans already own a plethora of both real and fake products.

Keep reading