Facebook’s enforcement report reveals AI is deleting 97 PERCENT of ‘hate speech’ before anyone reports it

Facebook has patted itself on the back for nuking almost all “hate speech” that supposedly violated its rules. But not only was most content deleted before anyone could flag it, users weren’t even allowed to appeal most deletions.

Unveiling its Community Standards Enforcement Report for the fourth quarter of 2020 on Thursday, Facebook bragged that its expanded use of artificial intelligence had helped it delete almost twice as much “bullying and harassment” content as in the previous quarter – just one of several categories in which removals skyrocketed. Its Instagram subsidiary, meanwhile, dramatically expanded its ability to catch suicide and self-injury related content.

Facebook axed 6.3 million bullying items, nearly doubling last quarter’s 3.5 million and assisted in large part by its AI technology. Expanded translation ability helped it remove 26.9 million pieces of “hate speech” content, up from 22.1 million in the third quarter. And Instagram nabbed 6.6 million pieces of hate speech while more than doubling the amount of suicide and self-harm content it removed – from 1.3 million to 3.4 million this quarter. 

Keep reading

Twitter Says Lincoln Project Cofounder John Weaver Didn’t Violate Rules on Unwanted Sexual Advances

Twitter says that disgraced Lincoln Project cofounder John Weaver did not violate its rules on unwanted sexual advances when he allegedly used the platform’s direct messaging system to sexually proposition young men.

Twitter’s rules against unwanted sexual advances prohibit:

  • unwanted sexual discussion of someone’s body;
  • solicitation of sexual acts; and
  • any other content that otherwise sexualizes an individual without their consent.

A Twitter spokesman told Breitbart News that the platform had looked into the matter and found no violations of its policies.

Over twenty young men have said John Weaver, who still has an active account on the platform, used Twitter’s direct messages feature to send them unwanted sexual advances.

Many screenshots of these direct messages have been published. One of the alleged victims was 14 years old, according to the New York Times.

Keep reading

Facebook says new algorithm will ‘reduce political content’ on news feeds

Facebook announced on Wednesday that, in the coming weeks, the social media platform will start limiting the amount of political content users see on their news feeds.

The company is aware that “people don’t want political content to take over their News Feed,” Product Management Director Aastha Gupta wrote in a blog post on the site.

The change will begin this week with Facebook temporarily reducing the distribution of political content in News Feed for a small percentage of people in Canada, Brazil and Indonesia.

Gupta said the process will begin in the U.S. in the coming weeks.

The initial rollout will allow the company to explore different methods of ranking political content before it decides on a permanent solution.

Keep reading

Facebook, the ADL and the Brewing Battle to Label Zionism as Hate Speech

Zionism is one word in particular that evokes intense and passionate debate, as all ideologies do. But now the term is coming under scrutiny after an “innocuous” email shed light on Facebook’s response to concerted action by a powerful Jewish rights group – action that led to new community guidelines curbing so-called “hate speech.”

The leaked email brought attention to a discussion ostensibly taking place inside the multi-billion-dollar company about designing its censorship algorithms and moderator criteria according to the wishes of the Anti-Defamation League (ADL). That discussion posed the question of whether moderators should interpret the word as a slur against Jewish people in general, or just against Israelis.

Keep reading

Tulsi Gabbard tells Steven Crowder: Big tech does not get to decide who has a voice and who doesn’t

Gabbard — who recently warned that “domestic enemies” in big tech and the national security community, such as Rep. Adam Schiff (D-Calif.) and former CIA Director John Brennan, are plotting to create a “police state” in America — argued Monday that big tech’s threat to free speech is one of the most dangerous issues facing the country.

For years, giant tech companies such as Facebook, Twitter, and YouTube have been scrutinized for arbitrarily censoring speech without consequence. But in recent months, the debate over their power has intensified amid the decision by several companies to ban former President Donald Trump from their platforms.

“Fundamentally as Americans, we agree on our constitutional right to free speech, [but now we] have these big tech monopolies essentially deciding who has a voice and who doesn’t in these virtual public town squares that they’ve created,” Gabbard lamented, adding, “You also have people in great positions of power in our government, for partisan or political reasons, trying to decide who gets to be heard and who doesn’t, just further inflaming the divisiveness and really, truly undermining our constitutional rights.”

“When we look at big tech and their ability to essentially act with impunity to do whatever they want — and making billions of dollars in the process — it speaks to the very dangerous place we are as a country,” she continued.

Gabbard called Trump’s removal from social media platforms a major indicator of how “dangerously powerful these big tech monopolies have become.”

Keep reading

DHS Sued Over Its Social Media Surveillance Tactics

On paper at least, when a federal agency receives a FOIA request, it’s required to respond with either a denial or a so-called “grant of access” within the span of 20 business days. As the CDT points out in its suit, even if every requested document can’t be released in this time frame, at the very least the agency should specify which documents are on the table and which are being withheld, and give the party asking for these docs the right to appeal those decisions.

By that rationale, when the CDT filed its initial FOIA request in mid-August 2019, it should have heard a response sometime in mid-September. Instead, it alleges that it hasn’t gotten a substantive response to date. Even USCIS – the only agency to offer any sort of timeline for wrangling these requested documents – initially estimated it would take until the end of December. In the 13 months since its self-set deadline, the CDT alleges the agency hasn’t returned any of the records requested.

“The public deserves to know how the government scrutinizes social media data when deciding who can enter or stay in the country,” said CDT General Counsel Avery Gardiner in a statement. “Government surveillance has necessary limits, particularly constitutional ones.”

Keep reading

Facebook’s new expert on ‘online disinformation’, Ben Nimmo, was a fantasy fiction writer. Has he really given that up?

The tech giant’s self-styled ‘troll-finder-general’ is touted as an authority on alleged Russian ‘information warfare’ – but any objective look at his background and track record raises troubling questions over his capabilities.

On February 5, Ben Nimmo announced that he would be joining Facebook with immediate effect, to help the social media monopoly “lead global threat intelligence strategy against influence operations.”

It’s a shocking development, yet somehow an entirely unsurprising one. After all, despite having less than no discernible professional or educational background in social media, data analysis, information technology, or digital research, in recent years he’s enjoyed a stratospheric rise to mainstream prominence as an expert on online “disinformation,” and a series of well-remunerated posts at a number of state-backed and quasi-state organizations. 

Keep reading

Facebook to BAN claims about ‘man-made’ Covid-19 & ‘unsafe’ vaccines as it launches election-like campaign to promote vaccination

Facebook is expanding the list of ‘false’ and ‘debunked’ claims about the coronavirus and vaccines that will be grounds for banning from the platform, while launching the largest ‘authoritative’ vaccination campaign worldwide.

Under the Community Standards policy, posts with “debunked claims” that Covid-19 is “man-made or manufactured,” or that vaccines are ineffective, unsafe, dangerous or cause autism will be removed starting Monday, VP of Integrity Guy Rosen announced on the Facebook blog.

The new policy was implemented following consultations with the World Health Organization (WHO) and others, and will help Facebook “continue to take aggressive action against misinformation” about Covid-19 and vaccines, Rosen added.

Even if they don’t violate any of the listed policies, posts about Covid-19 or vaccines will still be subject to review by “third-party fact-checkers” and labeled and “demoted” if rated false.

Keep reading

This is how we lost control of our faces

In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.

Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge.

It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.

People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

Keep reading