The smoking gun proving U.S. involvement in the 2014 coup in Kiev has been removed from YouTube after eight years.
It was one of the most-watched versions of the intercepted and leaked conversation between then-Assistant Secretary of State Victoria Nuland and Geoffrey Pyatt, then the U.S. ambassador to Ukraine. In it, the two discuss who will make up the new government weeks before democratically elected Ukrainian President Viktor Yanukovych was overthrown in a violent coup on Feb. 21, 2014.
The two talk about “midwifing” the unconstitutional change of government and “gluing it together” and of the role then Vice President Joe Biden should play and what meetings to set up with Ukrainian politicians.
The U.S. State Department never denied the authenticity of the video, and even issued an apology to the European Union after Nuland is heard on the tape saying, “Fuck the E.U.” Mainstream media at the time focused almost exclusively on that off-color remark, ignoring the greater significance of U.S. interference in Ukraine’s internal affairs.
NetChoice v. Paxton—the lawsuit that may determine the fate of free speech on social media platforms—has taken a dramatic turn. Just short of two weeks ago, the large platforms—including the likes of Amazon, Google, Twitter, and Facebook, all acting through their trade group, NetChoice—made an “emergency application” to Justice Samuel Alito.
This sort of application is familiar in cases involving grave harm, such as an execution. But is there really any risk of such harm or other emergency in this case? Or are the platforms trying to bum rush the Supreme Court so as to sidestep the ordinary course of judicial inquiry? The Supreme Court needs to be careful that it is not being manipulated.
The case arises out of the Texas free speech statute that bars the largest social media platforms from discriminating on the basis of viewpoint. In response, the platforms claim their censorship of speech is protected by the First Amendment.
Texas counters that they are common carriers, which serve as conduits for other people’s speech, and so can be barred from discriminating on the basis of viewpoint. In other words, the platforms are not being restricted in their own speech, but only barred from discriminating against the speech of others that they carry in their conduits.
These are complex questions, and even the slightest hint from the Supreme Court as to its answers will have outsize implications in the courts below. It therefore is disturbing that the platforms, speaking through NetChoice, have asked the court to take a position in a rushed “emergency application.” Such portentous questions should not be decided in a hurry. So why do the platforms want them resolved in proceedings that were briefed on only a few days’ notice?
At the World Economic Forum Annual Meeting for 2022, an event where powerful CEOs and world leaders meet to “find solutions to the world’s most urgent challenges,” YouTube CEO Susan Wojcicki committed to persistent censorship of “misinformation” and praised YouTube’s existing censorship efforts.
Wojcicki made the comments after Alyson Shontell Lombardi, the Editor-in-Chief of Fortune Magazine, asked her whether YouTube’s efforts to censor misinformation will always be a “work in progress.”
“I think there’ll always be work that we have to do because there will always be incentives for people to be creating misinformation,” Wojcicki said. “The challenge will be to keep staying ahead of that and make sure that we are understanding what they are and the different ways that people may use to try to trick our systems and make sure that our systems are staying ahead of what’s necessary to make sure that we are managing that.”
Wojcicki continued by praising YouTube's five- to six-year initiative to crack down on content deemed to be misinformation, and said that users who look at YouTube search results or the homepage will see content from "authoritative sources" (mainstream media outlets that YouTube designates as authoritative) for "sensitive topics."
We’re in the final countdown to British Home Secretary Priti Patel’s decision on the fate of Julian Assange, with the WikiLeaks founder’s extradition to the United States due to be approved or rejected by the end of the month. Joe Lauria has a new article out with Consortium News on the various pressures Patel faces from both sides of this history-making issue at this crucial time.
And I can’t stop thinking, as this situation comes to a boil, about how absurd it is that the US empire is working to set a precedent which essentially outlaws information-sharing that the US doesn’t like, at the same time that Western news media are full of hand-wringing headlines about the dangerous threat of “disinformation”.
Fairness and Accuracy In Reporting (FAIR) has an article out titled “‘Disinformation’ Label Serves to Marginalize Crucial Ukraine Facts” about the way the mass media have been spinning that label to mean not merely the knowing distribution of false information but also of information that is true but inconvenient to imperial narrative-weaving.
“In defense of the US narrative, corporate media have increasingly taken to branding realities inconvenient to US information goals as ‘disinformation’ spread by Russia or its proxies,” writes FAIR’s Luca Goldmansour.
Online platforms have been ramping up their censorship protocols under the banner of fighting disinformation and misinformation, and those escalations always align with narrative control agendas of the US-centralized empire. Just the other day we learned that Twitter has a new policy which expands its censorship practices to fight “misinformation” about wars and other crises, and the Ukraine war (surprise surprise) will be the first such situation about which it will be enforcing these new censorship policies.
Then there’s the recent controversy over the Department of Homeland Security’s “Disinformation Governance Board,” a mysterious institution ostensibly designed to protect the American people from wrongthink coming from Russia and elsewhere. The board’s operations (whatever they were) have been “paused” pending a review which will be led by Michael Chertoff, a virulent swamp monster and torture advocate. Its operations will likely be resumed in one form or another, probably under the leadership of someone with a low profile who doesn’t sing show tunes about disinformation.
And this all comes out after US officials straight up told the press that the Biden administration has been deliberately sowing disinformation to the public using the mainstream press in order to win an infowar against the Kremlin. They’ve literally just been circulating completely baseless stories about Russia and Ukraine, but nobody seems to be calling for the social media accounts of Biden administration officials to be banned.
Twitter’s pick to stop the spread of misinformation in times of crisis has a history of pushing falsehoods.
Yoel Roth, the head of Twitter’s safety and integrity unit, unveiled the site’s “crisis misinformation policy” on Thursday. In a blog post, Roth outlined how Twitter will place warning labels on tweets deemed to contain misinformation and prevent them from being “amplified or recommended” in times of armed conflict, natural disasters, or public health emergencies.
Roth is a questionable pick to launch the policy, given his own track record with misinformation. Roth oversaw Twitter’s decision to block the sharing of an October 2020 New York Post report on emails from Hunter Biden’s abandoned laptop. Roth told the Federal Election Commission he made the decision based on “rumors” shared by the United States government’s intelligence community that the Russian government might release materials hacked from Hunter Biden.
There has been no credible evidence that Biden’s laptop was hacked, or that Russia played a role in publishing emails from it. Former Twitter CEO Jack Dorsey later admitted that blocking the article was “a total mistake.”
Roth came under fire earlier in 2020 for referring to Trump officials as “actual Nazis” in a 2017 tweet. He also called Senate Minority Leader Mitch McConnell (R., Ky.) a “bag of farts.”
The House of Representatives has voted to pass a resolution that calls for increased surveillance and censorship of online speech, to help reduce antisemitism.
The resolution goes beyond condemning antisemitism; it goes into the realm of calling on social media platforms to do more to stop it.
We obtained a copy of the resolution for you here.
The resolution calls on social media platforms to “institute stronger and more significant efforts to measure and address online antisemitism” and, like most resolutions of this kind, pays lip-service to the idea of “protecting free speech concerns,” without providing details on how this is possible.
The resolution also calls for the House to work “in tandem with the cross-party Inter-parliamentary Task Force to Combat Online Antisemitism to help craft thoughtful global initiatives designed to address online antisemitism.”
The resolution names platforms specifically, saying that “antisemitic language, conspiracy theories, and hatred has increased on multiple social media platforms—from Facebook and Instagram to Twitter and TikTok.”
Rep. Thomas Massie, a Republican, was the only member of the House who recognized the implications of the government once again trying to insert itself into moderation on online platforms, and he voted against the resolution on free speech grounds.
Twitter has published what it calls a “crisis misinformation policy” announcing that it will be actively reducing the visibility of content found to be false which pertains to “situations of armed conflict, public health emergencies, and large-scale natural disasters.”
If you’ve been paying attention to the dramatic escalations in online censorship we’ve been seeing in 2022, it will not surprise you to learn that the Ukraine war is the first crisis to which this new censorship policy will be applied.
Twitter says that it “won’t amplify or recommend content” found to violate its new policy, and will also attach warning labels to individual tweets and even hide offending content behind a warning label and disable the retweet function on particularly naughty posts.
The problem here is of course the question of how to impartially establish whether something is objectively false without it turning into at best a flawed system guided by fallible human biases and perceptual filters and at worst a powerful institution shutting down unauthorized speech. Twitter says it formed its new policy with input from unnamed “global experts and human rights organizations,” and will be enforcing it with the help of “conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.” This will come as no comfort to anyone who’s familiar with the history of propaganda peddling that can be found in every single one of those respective categories.
Just one day after the White House confirmed that the Department of Homeland Security’s Disinformation Governance Board had been put on hold — and that board chief Nina Jankowicz had resigned — Twitter announced plans to implement a new “crisis misinformation policy.”
The plan, according to a tweet from @TwitterSafety, would be implemented to ensure that Twitter as a platform did not amplify or contribute to the spread of misinformation.
“People turn to Twitter during crisis times to share news, find support, and stay connected. Today, we’re launching a crisis misinformation policy so Twitter doesn’t recommend or amplify viral, false content that can further harm already vulnerable groups,” the tweet read.
“We’ve been refining our approach to crisis misinformation, drawing on input from global experts and human rights organizations. As part of this new framework, we’ll start adding warning notices on high visibility misleading Tweets related to the war in Ukraine,” the thread continued.
In a newly released video clip, Biden disinformation czar Nina Jankowicz demands that “trustworthy verified people” like her be given the power to edit other people’s tweets, making Twitter more like Wikipedia.
Asserting that she was “eligible for it because I’m verified,” Jankowicz then bemoaned the fact there are people on Twitter with different opinions to her who also have the blue tick but “shouldn’t be verified” because they’re “not trustworthy.”
“So verified people can essentially start to edit Twitter the same sort of way that Wikipedia is so they can add context to certain tweets,” said Jankowicz.
She then provided the example, which she claimed was non-political, of President Trump tweeting about voter fraud.
“Someone could add context from one of the 60 lawsuits that went through the court or something that an election official in one of the states said, perhaps your own Secretary of State and his news conferences, something like that,” said Jankowicz.
“Adding context so that people have a fuller picture rather than just an individual claim on a tweet,” she added.
Emory Law School’s Student Bar Association recently refused to recognize a free speech debate group because open discussions may spark “harm” and potentially “give rise to a precarious environment.”
As an alumnus of Emory Law, I found the decision especially disappointing, though not surprising; thankfully, it was overturned.
But a similar instance of an Emory University group retaliating against protected free speech came in October 2020, during my first year in Emory Law’s Juris Master program.
The Emory University College Republicans hosted Heather Mac Donald, a senior fellow at the Manhattan Institute and author of “The Diversity Delusion: How Race and Gender Pandering Corrupt the University and Undermine Our Culture.”
A “safe space” was established by Black and Latino groups at the university, seeking to “express their reaction to what McDonald had to say,” according to a post by the Emory NAACP.