Nuland-Pyatt Tape Removed From YouTube After 8 Years

The smoking gun proving U.S. involvement in the 2014 coup in Kiev has been removed from YouTube after eight years. 

It was one of the most-watched versions of the intercepted and leaked conversation between then-Assistant Secretary of State Victoria Nuland and Geoffrey Pyatt, the U.S. ambassador to Ukraine at the time, in which the two discuss who will make up the new government. The conversation took place weeks before democratically elected Ukrainian President Viktor Yanukovych was overthrown in a violent coup on Feb. 21, 2014.

The two talk about “midwifing” the unconstitutional change of government and “gluing it together,” and discuss the role then-Vice President Joe Biden should play and which meetings to set up with Ukrainian politicians.

The U.S. State Department never denied the authenticity of the recording, and even issued an apology to the European Union after Nuland was heard on the tape saying, “Fuck the E.U.” Mainstream media at the time focused almost exclusively on that off-color remark, ignoring the greater significance of U.S. interference in Ukraine’s internal affairs.

Keep reading

Is Big Tech Bum Rushing The Supreme Court On Censorship?

NetChoice v. Paxton—the lawsuit that may determine the fate of free speech on social media platforms—has taken a dramatic turn. Just short of two weeks ago, the large platforms—including the likes of Amazon, Google, Twitter, and Facebook, all acting through their trade group, NetChoice—made an “emergency application” to Justice Samuel Alito.

This sort of application is familiar in cases involving grave harm, such as an execution. But is there really any risk of such harm or other emergency in this case? Or are the platforms trying to bum rush the Supreme Court so as to sidestep the ordinary course of judicial inquiry? The Supreme Court needs to be careful that it is not being manipulated.

The case arises out of the Texas free speech statute that bars the largest social media platforms from discriminating on the basis of viewpoint. In response, the platforms claim their censorship of speech is protected by the First Amendment.

Texas counters that they are common carriers, which serve as conduits for other people’s speech, and so can be barred from discriminating on the basis of viewpoint. In other words, the platforms are not being restricted in their own speech, but only barred from discriminating against the speech of others that they carry in their conduits.

These are complex questions, and even the slightest hint from the Supreme Court as to its answers will have outsize implications in the courts below. It therefore is disturbing that the platforms, speaking through NetChoice, have asked the court to take a position in a rushed “emergency application.” Such portentous questions should not be decided in a hurry. So why do the platforms want them resolved in proceedings that were briefed on only a few days’ notice?

Keep reading

YouTube CEO at World Economic Forum: “There’ll always be work that we have to do” to censor “misinformation”

At the World Economic Forum Annual Meeting for 2022, an event where powerful CEOs and world leaders meet to “find solutions to the world’s most urgent challenges,” YouTube CEO Susan Wojcicki committed to persistent censorship of “misinformation” and praised YouTube’s existing censorship efforts.

Wojcicki made the comments after Alyson Shontell Lombardi, the Editor-in-Chief of Fortune Magazine, asked her whether YouTube’s efforts to censor misinformation will always be a “work in progress.”

“I think there’ll always be work that we have to do because there will always be incentives for people to be creating misinformation,” Wojcicki said. “The challenge will be to keep staying ahead of that and make sure that we are understanding what they are and the different ways that people may use to try to trick our systems and make sure that our systems are staying ahead of what’s necessary to make sure that we are managing that.”

Wojcicki continued by praising YouTube’s five-to-six-year effort to crack down on content that’s deemed to be misinformation, and said that users who look at YouTube search results or the homepage will see content from “authoritative sources” (mainstream media outlets that YouTube designates as authoritative) for “sensitive topics.”

Keep reading

Meet the Conspiracy Theorist Behind Twitter’s ‘Crisis Misinformation Policy’

Twitter’s pick to stop the spread of misinformation in times of crisis has a history of pushing falsehoods.

Yoel Roth, the head of Twitter’s safety and integrity unit, unveiled the site’s “crisis misinformation policy” on Thursday. In a blog post, Roth outlined how Twitter will place warning labels on tweets deemed to contain misinformation and prevent them from being “amplified or recommended” in times of armed conflict, natural disasters, or public health emergencies.

Roth is a questionable pick to launch the policy, given his own track record with misinformation. Roth oversaw Twitter’s decision to block the sharing of an October 2020 New York Post report on emails from Hunter Biden’s abandoned laptop. Roth told the Federal Election Commission he made the decision based on “rumors” shared by the United States government’s intelligence community that the Russian government might release materials hacked from Hunter Biden.

There has been no credible evidence that Biden’s laptop was hacked, or that Russia played a role in publishing emails from it. Former Twitter CEO Jack Dorsey later admitted that blocking the article was “a total mistake.”

Roth also came under fire for referring to Trump officials as “actual Nazis” in a 2017 tweet. He also called Senate Minority Leader Mitch McConnell (R., Ky.) a “bag of farts.”

Keep reading

Twitter To Ramp Up Censorship Of ‘Misinformation’ About The Ukraine War

Twitter has published what it calls a “crisis misinformation policy” announcing that it will be actively reducing the visibility of content found to be false which pertains to “situations of armed conflict, public health emergencies, and large-scale natural disasters.”

If you’ve been paying attention to the dramatic escalations in online censorship we’ve been seeing in 2022, it will not surprise you to learn that the Ukraine war is the first crisis to which this new censorship policy will be applied.

Twitter says that it “won’t amplify or recommend content” found to violate its new policy, and will also attach warning labels to individual tweets, hide offending content behind those labels, and even disable the retweet function on particularly naughty posts.

The problem here is, of course, the question of how to impartially establish whether something is objectively false without the effort devolving into, at best, a flawed system guided by fallible human biases and perceptual filters and, at worst, a powerful institution shutting down unauthorized speech. Twitter says it formed its new policy with input from unnamed “global experts and human rights organizations,” and will be enforcing it with the help of “conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.” This will come as no comfort to anyone who’s familiar with the history of propaganda peddling that can be found in every single one of those respective categories.

Keep reading

‘They’re Going To Get Their Ministry Of Truth One Way Or The Other’: Twitter Announces ‘Crisis Misinformation Policy’

Just one day after the White House confirmed that the Department of Homeland Security’s Disinformation Governance Board had been put on hold — and that board chief Nina Jankowicz had resigned — Twitter announced plans to implement a new “crisis misinformation policy.”

The plan, according to a tweet from @TwitterSafety, would be implemented to ensure that Twitter as a platform did not amplify or contribute to the spread of misinformation.

“People turn to Twitter during crisis times to share news, find support, and stay connected. Today, we’re launching a crisis misinformation policy so Twitter doesn’t recommend or amplify viral, false content that can further harm already vulnerable groups,” the tweet read.

“We’ve been refining our approach to crisis misinformation, drawing on input from global experts and human rights organizations. As part of this new framework, we’ll start adding warning notices on high visibility misleading Tweets related to the war in Ukraine,” the thread continued.

Keep reading

Half of Joe Biden’s Twitter Followers Are Fake, Audit Reveals

Almost half of President Joe Biden‘s current 22.2 million followers on Twitter are fake accounts, according to an audit tool provided by software company SparkToro.

SparkToro’s tool found that 49.3 percent of accounts following the official @POTUS Twitter account are “fake followers,” based on analysis of a number of factors, including location issues, default profile images, and recently created accounts.
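
SparkToro’s actual scoring model is proprietary and not described in the article; purely as an illustration of how a heuristic built on the factors mentioned (location issues, default profile images, account age) might work, here is a toy sketch with invented thresholds:

```python
# Toy fake-follower heuristic, loosely inspired by the factors the article
# mentions. This is NOT SparkToro's actual model; all thresholds and field
# names are invented for illustration.

from dataclasses import dataclass
from datetime import date

@dataclass
class Follower:
    has_default_avatar: bool
    location: str          # empty string if no location set
    created: date
    tweet_count: int

def fake_score(f: Follower, today: date = date(2022, 5, 18)) -> int:
    """Count suspicious signals; a higher score means more likely fake."""
    score = 0
    if f.has_default_avatar:
        score += 1
    if not f.location:                  # no location on the profile
        score += 1
    if (today - f.created).days < 30:   # very new account
        score += 1
    if f.tweet_count == 0:              # has never tweeted
        score += 1
    return score

def is_probably_fake(f: Follower) -> bool:
    return fake_score(f) >= 3           # arbitrary cutoff

suspect = Follower(True, "", date(2022, 5, 1), 0)
real = Follower(False, "Austin, TX", date(2015, 3, 2), 4200)
print(is_probably_fake(suspect), is_probably_fake(real))  # True False
```

A real audit tool would aggregate many more signals (follower/following ratios, posting cadence, username entropy) and weight them statistically rather than with a simple count, but the shape of the analysis is similar.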

Tesla CEO Elon Musk, who is currently trying to buy Twitter, has expressed concerns about the number of fake accounts, and a potential crackdown could see users such as Biden lose a huge number of followers.

Musk said early on Tuesday that his $44 billion deal to purchase the social media company could not go ahead until issues with fake accounts were resolved, and he appeared to criticize Twitter CEO Parag Agrawal.

Keep reading

CAMBRIDGE ANALYTICA REBORN? PRIVATE SPY AGENCY WEAPONIZES FACEBOOK AGAIN

On April 4, plaintiffs in a class action lawsuit brought against Facebook over its data-sharing practices following the eruption of the Cambridge Analytica scandal filed a fresh motion, charging that the social media giant deliberately obstructed discovery of information revealing the scale of its malfeasance.

It’s the latest development in a wide-ranging controversy that began in the first months of 2017 and shows little sign of abating. In brief, Cambridge Analytica exploited a Facebook loophole to harvest the personal data of up to 50 million Americans, in order to manipulate voters on behalf of a number of right-wing candidates — potentially including Donald Trump — and political campaigns in the U.S. and elsewhere.

Since then, the company and its parent, SCL Group, have folded, with official investigations into their activities conducted in several countries, while Facebook has been fined a record $5 billion by the Federal Trade Commission for egregious breaches of user confidentiality. The entire dispute raised serious public concerns about online privacy and the malign influence of behavioral advertising and microtargeting, which endure to this day.

In September 2020, Cambridge Analytica’s former CEO, Alexander Nix, was disqualified from serving as a U.K. company director for seven years for offering unethical services, including “bribery or honey-trap stings, voter disengagement campaigns, obtaining information to discredit political opponents and spreading information anonymously in political campaigns.”

By contrast, one senior SCL staffer seemingly pivotal to many of those unethical practices (although they deny it) has been unaffected by the scandal’s fallout. In fact, they have profited and prospered immensely in its wake.

Keep reading

Delete your personal data from Google

Google now has a new tool allowing anyone to request the removal of their personal data from search results, including contact information.

“The availability of personal contact information online can be jarring,” said Google’s head of global policy in search Michelle Chang. She added that personal data could lead to “unwanted direct contact or even physical harm.”

Google already allowed the removal of personal or financial information from search results if a user could prove it posed a real danger or a potential threat. Now you can request the removal of your information even if there is no such risk.

Keep reading

Why Does YouTube Host This Channel That Teaches Kids About Porn And Abortion?

Amaze Org is a predatory YouTube channel that says it aims to “take the awkward” out of sex education for kids and boasts about its age-appropriate content for this digital generation of children. It has more than 220,000 subscribers and its free videos have a combined total of more than 60 million views.  

Why is YouTube allowing this organization to push its sexual agenda on kids? YouTube’s content policy clearly states, “Content that targets young minors and families but contains sexual themes, violence, obscene, or other mature themes not suitable for young audiences, is not allowed on YouTube.”

On Amaze Org’s about page, the organization says its mission is “to provide young adolescents around the globe with medically accurate, age-appropriate, affirming, and honest sex education they can access directly online.” They also provide curricula for schools, parents, and “allies.”

Their videos can be accessed on their website, YouTube, Instagram, TikTok, and Snapchat. The topics covered in the videos are gender identity, sexuality, abortion, birth control, puberty, masturbation, pornography, and more. It’s all explicit and not appropriate for children.

Keep reading