Twitter Wants You To Believe Universal Mail-In Ballots Are Safe… Despite Mounting Evidence Showing Otherwise

Twitter fact-checkers don’t believe in the mounting evidence of voter fraud across the country due to issues with universal mail-in ballots. In fact, the social media platform doesn’t even want you to question it.

President Donald Trump tweeted out a simple statement of fact: “the ballots being returned to states cannot be accurately counted. Many things are already going very wrong!”

Twitter – which is supposed to be a platform of free discourse – again acted as a publisher to fact check the President, putting a link below his Tweet in red saying “learn how voting by mail is safe and secure.”

But look at this major issue with vote-by-mail ballots in New York City – The New York Post, Breitbart and others reported this year that in the Democratic primary, 26 percent of mail-in votes were disqualified (roughly 84,208 ballots). Wow, that’s a lot of voters.


Facebook Will Restrict Certain Users If US Election Gets “Extremely Chaotic Or Violent”

Earlier this week, Facebook gave us a welcome break from the virtue-signaling by threatening to pull its business from Europe should courts uphold an EU-wide ban on transferring European user data to US-based servers (something Washington is desperately trying to stop TikTok from doing, in a sense).

But that didn’t last long. On Tuesday, the social media giant’s head of global communications, former deputy PM Nick Clegg, told the Financial Times that the company is developing contingency plans should the US election lead to an outbreak of chaos and uncertainty. Though he didn’t go into too much detail, the implication is clear: Facebook is planning to significantly curtail speech on its platform, echoing the Internet blackouts utilized by authoritarian regimes in Iran, Venezuela and elsewhere.

Clegg preferred to call them the “break-the-glass” options, and assured readers that they probably wouldn’t happen anyway.

In an interview with the Financial Times, Nick Clegg, the company’s head of global affairs, said it had drawn up plans for how to handle a range of outcomes, including widespread civic unrest or “the political dilemmas” of having in-person votes counted more rapidly than mail-in ballots, which will play a larger role in this election due to the coronavirus pandemic. “There are some break-glass options available to us if there really is an extremely chaotic and, worse still, violent set of circumstances,” Mr Clegg said, though he stopped short of elaborating further on what measures were on the table. The proposed actions, which would probably go further than any previously taken by a US platform, come as the social media group is under increasing pressure to lay out how it plans to combat election-related misinformation, voter suppression and the incitement of violence on the November 3 election day and during the post-election period.

Of course, post-election day indecision is nothing new in American politics, though it will be the first time we’ve seen one since Facebook was founded in 2004. It also comes – as the FT none-too-subtly points out – as “concerns mount that even US president Donald Trump himself could take to social media to contest the result or call for violent protest, potentially triggering a constitutional crisis.”

But don’t worry: as Clegg explains, Facebook has done this before in “other parts of the world.”

“We have acted aggressively in other parts of the world where we think that there is real civic instability and we obviously have the tools to do that [again],” Mr Clegg added, citing the previous use of “pretty exceptional measures to significantly restrict the circulation of content on our platform”.

Facebook has also taken several steps to immediately address any harmful activity that might emerge on its platform during the election. Citing unnamed sources, the FT says Facebook has planned for more than 70 scenarios, and that any high-stakes decisions will fall to a team of executives including CEO Mark Zuckerberg and COO Sheryl Sandberg. The company is employing a range of experts, including military planners, to help its leadership make the best decisions possible.

“We’ve slightly reorganised things such that we have a fairly tight arrangement by which decisions are taken at different levels [depending on] the gravity of the controversy attached,” Mr Clegg said. The executive also said that “the amount of resources we are throwing at this is very considerable”. Facebook will have a virtual war room – dubbed its “Election Operations Centre” – for monitoring for suspicious activity and updating its “voter information hub”, which will showcase verified results to users, he said.


The WHO is monitoring online conversations and emotions, using “social listening” to change COVID narratives

The World Health Organization is collaborating with an analytics company to scan people’s social media conversations for “coronavirus misinformation” – something the WHO calls “social listening.”

The global health organization says that it’s not only fighting the pandemic but also the conversations people are having about it.

According to the WHO, there’s an “infodemic” – an overload and spread of misleading information – so severe that the organization decided that to tackle misinformation, it needs to employ various tools, including social listening backed by machine-learning monitoring.

“Countering fake news or rumors is actually only responding or mitigating when it’s too late,” said Tim Nguyen, a technology expert helping the WHO’s Information Network for Epidemics (EPI-WIN) unit. “What we’ve put in place in the beginning of the pandemic is what we call a social listening approach.”

The company has been creepily scanning more than 1.6 million social media posts each week to monitor online conversation. It then uses machine learning to classify information into four topics: cause, illness, interventions, and treatments. The WHO’s aim is to learn which coronavirus topics are gaining popularity so that it can then create its own content to counteract and attempt to change the narrative.

The WHO’s “social listening” goes beyond analyzing people’s conversations for content; it also tries to analyze their emotions. Through language analytics, the technology detects emotions such as sadness, acceptance, denial, and anxiety. With such insights, the WHO hopes to come up with effective strategies to adjust coronavirus narratives.

“What we’ve learned now, after two and a half months of doing this kind of analysis, is that there are recurring themes and topics that are coming back over and over again,” Nguyen explained. “What that means to us is that we need to re-push information at different times. People may not understand it the first time when we push it, but when the questions and issues come up later, it means it’s time to push it out again.”
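The WHO has not published its actual pipeline, but the process Nguyen describes – bucketing each post into one of the four topics, tagging a coarse emotion, then aggregating to see which themes recur so content can be re-pushed – can be sketched with a toy keyword matcher. Everything here (the keyword lists, function names) is an illustrative assumption, not the WHO’s real machine-learning system:

```python
# Toy "social listening" sketch: classify posts into the four topics the
# WHO reportedly tracks, tag a coarse emotion, and count recurring themes.
# Keyword sets are illustrative assumptions, not the WHO's actual model.
from collections import Counter

TOPIC_KEYWORDS = {
    "cause": {"origin", "lab", "bat", "source"},
    "illness": {"symptom", "fever", "cough", "sick"},
    "interventions": {"lockdown", "mask", "distancing", "quarantine"},
    "treatments": {"vaccine", "drug", "cure", "treatment"},
}

EMOTION_KEYWORDS = {
    "anxiety": {"scared", "worried", "afraid", "panic"},
    "denial": {"hoax", "fake", "overblown"},
    "sadness": {"lost", "grief", "mourning"},
    "acceptance": {"adapt", "cope"},
}

def classify(post):
    """Return the best-matching (topic, emotion) pair for one post."""
    words = set(post.lower().replace(",", " ").replace(".", " ").split())
    def best(keyword_map):
        scores = {label: len(words & kws) for label, kws in keyword_map.items()}
        label = max(scores, key=scores.get)
        return label if scores[label] > 0 else "unclassified"
    return best(TOPIC_KEYWORDS), best(EMOTION_KEYWORDS)

def trending_topics(posts):
    """Aggregate a batch of posts, mimicking the weekly scan that surfaces
    recurring themes worth 're-pushing' counter-content against."""
    return Counter(classify(p)[0] for p in posts)
```

A production system would swap the keyword sets for a trained text classifier, but the control flow is the same: classify every post, then aggregate so the recurring themes stand out.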


Twitter Suspends Account Of Chinese Scientist Who Published Paper Alleging Covid Was Created In Wuhan Lab

On Sunday afternoon we asked how long it would be before the Twitter account of the “rogue” Chinese virologist Dr. Li-Meng Yan – who had just “shocked” the world of establishment scientists and other China sycophants by publishing a “smoking gun” scientific paper alleging that the Covid-19 virus was manmade – was “silenced.”

We now have the answer: less than two days. A cursory check of Dr. Yan’s Twitter page reveals that the account has been suspended as of this moment.

The suspension took place shortly after Dr Yan had accumulated roughly 60,000 followers in less than 48 hours.


Facebook says it’ll REMOVE all posts alleging Oregon fires ‘were started by certain groups’

Facebook has announced that it will be wiping messages alleging that wildfires, which have been sweeping through Oregon, might be the work of certain groups after the FBI brushed off reports of arson as “conspiracy theories.”

“We are removing false claims that the wildfires in Oregon were started by certain groups,” Andy Stone, policy communications manager at Facebook, tweeted late on Saturday.

Defending what is effectively an act of censorship by the social media giant, Stone noted that speculation suggesting the blazes were ignited by extremists is forcing law enforcement to “divert resources from fighting the fires and protecting the public.”

“This is consistent with our past efforts to remove content that could lead to imminent harm given the possible risk to human life as the fires rage on.”

The move comes as multiple wildfires are tearing through Oregon, closing in on Portland, the center of the Black Lives Matter protests, which have been marred by vandalism and arson.
