Facebook Bans Ads Questioning Safety Of COVID-19 Vaccines

The company has also launched salvos against QAnon and election-related misinformation, while taking an aggressive approach toward political advertising, and political content in general.

And as global authorities struggle to convince the public that an eventual COVID-19 vaccine will be safe to take despite the expedited approval process, Facebook has decided to give them a hand by banning all ads that discourage users from getting a vaccine. It laid out the new global policy in a blog post published Tuesday.

“Now, if an ad explicitly discourages someone from getting a vaccine, we’ll reject it,” the company’s Head of Health Kang-Xing Jin and Director of Product Management Rob Leathern said in a blog post Tuesday.

Facebook will draw the line at opposition to “mandatory vaccination”: users who advocate against vaccine mandates, which the company said is a legitimate political position (not an argument made in “bad faith,” as some on the left insist), may post as normal. It cited the example of a state lawmaker from Virginia who posted “STOP FORCED CORONAVIRUS VACCINATIONS.”

Keep reading

Facebook bans Holocaust denial content

Facebook has announced it will remove all content on its platform that “denies or distorts the Holocaust.” The company says this expansion of its hate speech policies is a response to what it calls “the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust, especially among young people.” Facebook has previously faced strong criticism for letting Holocaust denial content spread freely on its platform.

In addition to removing content that denies or distorts the Holocaust, the company says that, starting later this year, it will direct anyone searching on Facebook for terms related to this topic to “credible information” supplied by third-party sources.

“Enforcement of these policies cannot happen overnight. There is a range of content that can violate these policies, and it will take some time to train our reviewers and systems on enforcement,” said Facebook’s VP of content policy, Monika Bickert, in a blog post.

Earlier this year, Facebook said it would ban anti-Semitic stereotypes that depict Jewish people as “running the world or its major institutions.” But a report a week later by a UK counter-extremism group, the Institute for Strategic Dialogue (ISD), found that the company’s algorithm “actively promotes” Holocaust denial content.

Removing content that denies or distorts the Holocaust may seem like an obvious decision for a company that is frequently accused of enabling hate speech. But in the past, Facebook CEO Mark Zuckerberg, who is Jewish, has presented the company’s tolerance of Holocaust denial as an example of its commitment to principles of free speech.

In an interview with Recode in 2018, Zuckerberg said that Facebook wouldn’t remove content from Holocaust deniers because he believed these individuals weren’t “intentionally getting it [the Holocaust] wrong.”

“It’s hard to impugn intent and to understand the intent,” said Zuckerberg. “I just think, as abhorrent as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public figures we respect do too, and I just don’t think that it is the right thing to say, ‘We’re going to take someone off the platform if they get things wrong, even multiple times.’” (Zuckerberg later added: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.”)

In a Facebook post today, Zuckerberg said his thinking on the matter had “evolved,” in part in response to a climate of “rising anti-Semitism.”

Keep reading

Who at Facebook Is Deleting Donald Trump’s Posts?

Facebook declined to answer a series of specific questions from Motherboard about how it polices Trump’s content, and why it took so long to delete this post. In an email, Facebook spokesperson Andrea Vallone said that the company has “teams around the globe that deal with content questions, so coverage across all time zones, and we have our Elections Operations Center in the U.S. up and running. The president is treated the same as other politicians.”

Keep reading

Reporters Claim Facebook is Censoring Information on Julian Assange Case

Craig Murray, the former British ambassador to Uzbekistan and longtime confidant of Julian Assange, has been fastidiously reporting on the Australian publisher’s extradition hearing to the United States. Yet few people have been reading it. This, according to Murray, is because of a deliberate decision by online media giants to downplay or suppress discussion of the case. On his blog, Murray wrote that he usually receives around 50 percent of his readers from Twitter and 40 percent from Facebook links, but that those figures have dropped to 3 percent and 9 percent, respectively, during the hearing. While the February hearings sent around 200,000 readers to his site daily, now that figure is only 3,000.

“To be plain that is very much less than my normal daily traffic from them just in ordinary times. It is the insidious nature of this censorship that is especially sinister – people believe they have successfully shared my articles on Twitter and Facebook, while those corporations hide from them that in fact it went into nobody’s timeline,” he added.

Asked about the situation by former New York Times reporter Chris Hedges, Murray explained:

“Anybody who is at all radical or takes any view of anything that is outwith the official establishment view gets used to occasional shadow banning, but I have never seen anything on this scale before.”

“90% of my traffic has just been cut off by what seems to be a general algorithm command of some kind to downplay Assange,” he added. “I think it is as simple as that.”

Keep reading

Facebook Will Restrict Certain Users If US Election Gets “Extremely Chaotic Or Violent”

Earlier this week, Facebook gave us a welcome break from the virtue-signaling by threatening to pull its business from Europe should courts uphold an EU-wide ban on transferring European user data to US-based servers (something Washington is, in a sense, desperately trying to stop TikTok from doing).

But that didn’t last long. On Tuesday, the social media giant’s head of global communications, former deputy PM Nick Clegg, told the Financial Times that the company is developing contingency plans should the US election lead to an outbreak of chaos and uncertainty. Though he didn’t go into too much detail, the implication is clear: Facebook is planning to significantly curtail speech on its platform, echoing the Internet blackouts utilized by authoritarian regimes including Iran, Venezuela and elsewhere.

Clegg preferred to call them the “break-the-glass” options, and assured readers that they probably wouldn’t be needed anyway.

In an interview with the Financial Times, Nick Clegg, the company’s head of global affairs, said it had drawn up plans for how to handle a range of outcomes, including widespread civic unrest or “the political dilemmas” of having in-person votes counted more rapidly than mail-in ballots, which will play a larger role in this election due to the coronavirus pandemic. “There are some break-glass options available to us if there really is an extremely chaotic and, worse still, violent set of circumstances,” Mr Clegg said, though he stopped short of elaborating further on what measures were on the table. The proposed actions, which would probably go further than any previously taken by a US platform, come as the social media group is under increasing pressure to lay out how it plans to combat election-related misinformation, voter suppression and the incitement of violence on the November 3 election day and during the post-election period.

Of course, post-election day indecision is nothing new in American politics, though it will be the first time we’ve seen it since Facebook was founded in 2004. It also comes – as the FT none-too-subtly points out – as “concerns mount that even US president Donald Trump himself could take to social media to contest the result or call for violent protest, potentially triggering a constitutional crisis.”

But don’t worry: as Clegg explains, Facebook has done this before in “other parts of the world.”

“We have acted aggressively in other parts of the world where we think that there is real civic instability and we obviously have the tools to do that [again],” Mr Clegg added, citing the previous use of “pretty exceptional measures to significantly restrict the circulation of content on our platform”.

Facebook has also taken several steps to prepare to address any harmful activity that might emerge on its platform during the election. Citing unnamed sources, the FT says Facebook has planned for more than 70 scenarios, and that any high-stakes decisions will fall to a team of executives including CEO Mark Zuckerberg and COO Sheryl Sandberg. The company is employing a range of experts, including military planners, to help the company’s leadership make the best decisions possible.

“We’ve slightly reorganised things such that we have a fairly tight arrangement by which decisions are taken at different levels [depending on] the gravity of the controversy attached,” Mr Clegg said. The executive also said that “the amount of resources we are throwing at this is very considerable”. Facebook will have a virtual war room – dubbed its “Election Operations Centre” – for monitoring for suspicious activity and updating its “voter information hub”, which will showcase verified results to users, he said.

Keep reading

Facebook says it’ll REMOVE all posts alleging Oregon fires ‘were started by certain groups’

Facebook has announced that it will be wiping messages alleging that wildfires, which have been sweeping through Oregon, might be the work of certain groups after the FBI brushed off reports of arson as “conspiracy theories.”

“We are removing false claims that the wildfires in Oregon were started by certain groups,” Andy Stone, policy communications manager at Facebook, tweeted late on Saturday.

Defending what is effectively an act of censorship by the social media giant, Stone noted that speculation suggesting the blazes were ignited by extremists is forcing law enforcement to “divert resources from fighting the fires and protecting the public.”

“This is consistent with our past efforts to remove content that could lead to imminent harm given the possible risk to human life as the fires rage on.”

The move comes as multiple wildfires are tearing through Oregon, closing in on Portland, the center of the Black Lives Matter protests, which have been marred by vandalism and arson.

Keep reading

Facebook Declares Kyle Rittenhouse’s Actions ‘Mass Murder,’ Won’t Allow Posts in Support

Facebook is officially designating as “mass murder” the actions of Kyle Rittenhouse, the 17-year-old who allegedly killed two people and wounded a third in Kenosha, Wisconsin, after being mobbed and attacked by rioters, despite the fact that this has not been proven in court. Facebook confirmed that it would remove any posts that support Rittenhouse.

Conservative author and YouTube personality Mark Dice reported that his Facebook page had received a threat of suspension after he posted a video of Rittenhouse offering to give medical assistance to wounded protesters in Kenosha shortly before the shooting.

“Facebook just removed a video I posted showing Kyle Rittenhouse offering help to a wounded protester in Kenosha,” said Dice, who also posted the video on Twitter.

Facebook, he said, is now “threatening to delete my entire account.”

In a comment to Breitbart News, a Facebook spokesman confirmed that the company is removing posts “in support” of Rittenhouse, because the incident is considered a “mass murder” by the company.

“We’ve designated the shooting in Kenosha a mass murder and are removing posts in support of the shooter, including this one,” said the spokesman.

Keep reading

Facebook’s purge of left-wing radicals

One of the strangest claims made today about free speech is that it is a ‘right-wing’ talking point. It’s an excuse to spout racism, sexism, homophobia, transphobia, Islamophobia – you pick the prejudice. Some even claim that there is no crisis of free speech, and anyone who professes their support for this most liberal of values is essentially a useful idiot for the far right.

But Facebook’s latest crackdown makes clear that it is not only the right whose free speech is under threat. Earlier this month, Facebook announced an extension of its ‘Dangerous Individuals and Organisations’ policy. This was the policy it used to ban right-wing controversialists like Alex Jones, Milo Yiannopoulos and Paul Joseph Watson – alongside anti-Semite Louis Farrakhan – who were all accused of promoting ‘violence and hate’.

Now, in the wake of the Black Lives Matter protests-cum-riots, Facebook has decided to cast its net of censorship wider. ‘We have seen growing movements that, while not directly organising violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behaviour’, Facebook said in a recent statement.

This has led to Facebook removing 790 groups, 100 pages and 1,500 ads tied to the crankish, right-wing QAnon conspiracy theory. But it has also led to a purge of so-called ‘Antifa’ groups and other far-left organisations. ‘For militia organisations and those encouraging riots… we have initially removed over 980 groups, 520 pages and 160 ads from Facebook. We have also restricted over 1,400 hashtags related to these groups and organisations on Instagram’, the statement continues.

This is not the first time Antifa groups have been censored by Big Tech. Back in 2018, PayPal started refusing to process payments which were connected to a number of prominent Antifa organisations at the same time as it banned Tommy Robinson, Alex Jones and the Proud Boys.

Keep reading

Facebook Introducing a ‘Virality Circuit Breaker’ to Prevent Spread of Viral Content

According to a recent report from the Interface newsletter by Verge reporter Casey Newton, Facebook is piloting a new program that will monitor viral posts that gain millions of views to ensure that they don’t violate community standards.

In the newsletter, Newton also lists suggestions made by the Center for American Progress (CAP) to prevent the spread of certain content on the platform.

Keep reading

Facebook Fails to Fact Check Fake News Claiming Cannon Hinnant’s Father Did Drugs With Alleged Killer

After the Wilson Police Department confirmed that a doctored press release (fraught with typos, spelling mistakes, and grammatical errors) had been circulated to swing the narrative away from Cannon Hinnant’s cold-blooded murder, another meme attempting to paint the murdered child’s parents in a bad light has surfaced.

The unsubstantiated meme alleges that Cannon’s father, Austin Hinnant, was a meth dealer responsible for inducing a psychotic state of mind in Cannon’s alleged murderer, Darius Sessoms.

Sessoms is alleged to have shot the five-year-old in the head at point-blank range for riding his bike on Sessoms’ front yard.

Those defending Sessoms’ unprovoked crime claim that he is being singled out for his skin color, with some dubbing Sessoms a “political prisoner” and others accusing Hinnant’s family of racism.

Keep reading