FDA Commissioner Dr. Robert Califf says “regulation” is needed to target “misinformation”

In an interview with CNBC, FDA Commissioner Dr. Robert Califf said that online misinformation is harming people’s life expectancy, that there is a need for “better regulation” of how health misinformation is handled, and that “specific authorities at FDA, FTC, and other areas are going to be needed.”

“We know more and more about misinformation. It relates back to this life expectancy,” Califf said. “Why aren’t we using knowledge of diet? It’s not that people don’t know about it. Why aren’t we using medical products as effectively and efficiently as our peer countries? A lot of it has to do with choices that people make because of the things that influence their thinking. The COVID vaccines and the antivirals give us an easy way to talk about it, but this is not limited to those areas. In heart disease, so many people don’t take their medicines, even though they’re now generic and very low-cost, often [they’re] deluded into taking things that are sold over the Internet that aren’t effective.”

According to the FDA commissioner, one of the solutions is telling the truth at “a louder volume.”

“In the good old days, when I was a practicing cardiologist, for the most part, people developed products, they got through the FDA, the label determined what was talked about, the Internet didn’t exist, you advertised in medical meetings and journals. There was sort of a hierarchy of information that went through the prescriber or the implanter in the case of devices to the patient. Of course, the problem in that system is it left a lot of people out. We now know about that. Now, everyone’s included because everyone’s connected to the Internet. But we can put out a statement about what we’ve determined based on the highest level of evidence, within ten minutes, someone who’s thought ten minutes about it can reach a billion people. And there’s nothing that restricts them from telling things that are not true. This has always existed. … But they couldn’t reach so many people,” he explained.

He added that there isn’t enough regulation of health information and that this is “impacting our health in very detrimental ways.” As such, he thinks “there is a real need for better regulation of how to deal with this complex information.”

Califf noted that the FDA already has regulatory authority over advertising content on tech platforms, but he feels the agency could do better.

Keep reading

Federal Reserve head is questioned on controversial “debanking of disfavored individuals”

During a recent House Committee hearing, Chair of the Federal Reserve Jerome Powell was grilled on “Operation Choke Point 2.0” — an alleged Biden administration effort that Rep. Warren Davidson (R-OH) described as being “particularly focused on debanking people that are disfavored by…the current…executive branch.”

Davidson told Powell that he’d spoken with multiple bankers who said “they’ve never seen a higher degree of regulatory burden, steering guidance, shaping activities in the market from regulators.”

He attributed this heightened scrutiny to Operation Choke Point 2.0, a reference to an alleged extension of the original Operation Choke Point, an Obama-era debanking effort that began in 2013 and attempted to prevent gun dealers, payday lenders, and other companies deemed “high risk” from accessing banking services. Some people in the cryptocurrency industry claim that Operation Choke Point 2.0 is now being carried out by the Biden administration and is primarily focused on deterring banks from doing business with cryptocurrency firms.

“When people really feel like some third party is going to steer or shape their money, they don’t trust it,” Davidson added. “I mean, the unbanked and the underbanked: fundamentally, that lack of trust is part of why they don’t use our banking system today. In fact, that’s part of the appeal of the digital asset space…the permissionless nature of it.”

Davidson went on to suggest that many people working in the financial services space “feel threatened by the prospect of change” and are attempting to restrict access to services such as cryptocurrency.

“They’ve maybe reluctantly concluded that you can’t ban crypto,” Davidson said. “They at least want to keep it account based so some third party can actually control the assets which is a polite way of saying, ‘We don’t actually trust our citizens to control their money or their assets, we’ll let somebody else do it for them because we can control those third parties.’”

Davidson then pressed Powell on whether financial regulators use their powers to control third parties.

“If you don’t comply with the regulatory regime, you don’t get to operate a financial services business, right?” Davidson asked Powell.

“That’s right,” Powell confirmed.

Keep reading

Feds give professors $5.7M to develop tool to combat ‘misinformation’

A group of professors is using taxpayer dollars doled out by the federal government to develop a new misinformation fact-checking tool called “Course Correct.”

National Science Foundation funding, awarded through a pair of grants in 2021 and 2022, has amounted to more than $5.7 million for the development of this tool, which, according to the grant abstracts, is intended to help reporters, public health organizations, election administration officials, and others address so-called misinformation on topics such as U.S. elections and COVID-19 vaccine hesitancy.

This $5.7 million in grant money is on top of nearly $200,000 more awarded in 2020 through a Coronavirus Aid, Relief and Economic Security Act-funded NSF grant for a project, focused in part on mental health, out of which Course Correct is said to have grown.

According to the abstract of the 2021 grant, Course Correct’s developers, a group of five professors from various institutions nationwide, are using techniques related to machine learning and natural language processing to identify social media posts pertaining to electoral skepticism and vaccine hesitancy, identify people likely to be exposed to misinformation in the future, and flag at-risk online communities for intervention.
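The abstract does not specify which models the developers are using, but the pipeline it describes (classify posts, then flag accounts and communities) matches a standard supervised text-classification setup. As a purely illustrative sketch, not the Course Correct team’s actual code, the flagging step might look something like the following Python, where every post, label, and cutoff is invented for demonstration:

```python
# Hypothetical sketch of the kind of pipeline the grant abstract describes:
# a supervised classifier that scores social media posts against
# hand-labeled "misinformation" examples. All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: posts labeled 1 ("flag") or 0 ("ignore").
train_posts = [
    "the election results cannot be trusted",   # 1
    "vaccines change your DNA",                 # 1
    "polls close at 8 pm in most counties",     # 0
    "the clinic offers walk-in appointments",   # 0
]
train_labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Score new posts; anything above a chosen cutoff would be "flagged".
new_posts = ["early voting starts monday", "do not trust the vote counts"]
for post, score in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"{score:.2f}  {post}")
```

A production system of the kind the grants describe would presumably swap the toy features for larger language models and add the network analysis needed to “identify people likely to be exposed,” but the basic classify-and-flag structure would be similar.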

Keep reading

New Law Sought by Brazil’s Lula to Ban and Punish “Fake News and Disinformation” Threatens the Free Internet Everywhere

A major escalation in official online censorship regimes is progressing rapidly in Brazil, with implications for everyone in the democratic world. Under Brazil’s new government headed by President Lula da Silva, the country is poised to become the first in the democratic world to implement a law that not only censors and bans “fake news” and “disinformation” online, but also punishes those deemed guilty of spreading it. Such laws already exist throughout the non-democratic world, adopted years ago by the planet’s most tyrannical regimes in Saudi Arabia, Egypt, Qatar, the United Arab Emirates and Turkey.

If one wishes to be generous with the phrase “the democratic world” and include Malaysia and Singapore – at best hybrid “democracies” – then one could argue that a couple of other “democratic” governments have already seized the power to decree Absolute Truth and then ban any deviation from it. But absent unexpected opposition, Brazil will soon become the first country unambiguously included in the democratic world to outlaw “fake news” and vest government officials with the power to banish it and punish its authors.

Last May, the U.S. Department of Homeland Security was forced to retreat from its attempt to appoint a “disinformation czar” to oversee what would effectively have been its Ministry of Truth. That new DHS agency was, at least nominally, to be only advisory: it would declare truth and falsity and then pressure online platforms to comply by banning whatever was deemed false. The backlash was so great that DHS finally claimed to have canceled it, though secret documents that emerged in October described the agency’s plans to continue shaping the online censorship decisions of Big Tech.

Brazil’s law would be anything but advisory. Though the details have yet to be released, it would empower law enforcement officials to take action against citizens deemed to be publishing statements the government classifies as “false,” and to ask courts to impose punishment on those who do so.

Keep reading

Twitter Files: Sen. Angus King Targeted ‘Suspicious’ Americans for Blacklisting

According to the latest drop of the Twitter Files, Sen. Angus King (I-ME) flagged accounts his office disliked to the social media platform, accusing Americans of being “suspicious” for reasons that included being excited about a visit from Sen. Rand Paul, mentioning immigration in their tweets, or being followed by a political rival.

Twitter users have been sharing their ideas, opinions, and thoughts on the platform for a long time. But in recent years, the government’s role in policing this content has come under scrutiny. An intricate system of government involvement in Twitter moderation has been exposed by the Twitter Files, with journalist Matt Taibbi compiling a collection of thousands of moderation requests.

The Twitter Files have revealed a number of details about the internal workings of the social media platform in recent months. The latest batch, released over the weekend, shows that government officials frequently misidentified Americans as fictitious Russians. Further complicating the role of governments in online content moderation is the discovery that Twitter gave the “U.S. intelligence community” moderation authority.

Keep reading

Police told to keep records of ‘conspiracy theorists’ and ‘communists’

The Queensland police are required to report interactions with people who “allegedly hold a range of ideological beliefs,” including “conspiracy theorists,” according to an email sent to all members of the Queensland Police Service on Thursday and seen by ABC News.

Police must record interactions with such individuals in the state’s QPrime database “at the first available opportunity,” the email states, explaining that in addition to “conspiracy theorists,” “religious, social or political extremists and sovereign citizens, as well as people with ideologies relating to capitalism, communism, socialism or Marxism” also fit the bill.

Officers were also told to report “all matters that indicate concerning or escalating behavior due to ideological beliefs, including religious and single-issue ideologies” and advised of the “risks” inherent in dealing with these groups.

The guidance comes after a deadly shooting in Wieambilla in December left two officers and a civilian dead, in addition to the three shooters. Nathaniel, Gareth and Stacey Train reportedly gunned down constables Rachel McCrow and Matthew Arnold and injured two other officers on their property, also killing neighbor Alan Dare when he was drawn to the scene by the noise. The Trains were then killed by specialist police following an hours-long standoff.  

Keep reading

The New York City Department of Health created a “Misinformation Response Unit” to monitor social media

The New York City Department of Health and Mental Hygiene set up a “Misinformation Response Unit” to monitor what it would determine to be “dangerous misinformation” posted on social media, non-US sites, and non-English media in the US.

This “misinformation” mostly had to do with Covid vaccination; the Department was determined to drive vaccination rates up by spreading its word, and to that end gathered over 100 partners whose job was to craft positive messaging around the controversial subject.

Among those the dedicated new unit worked with is Public Good Projects, a group otherwise known for receiving funding from a lobbying group representing two major Covid vaccine manufacturers, Pfizer and Moderna.

Their “good” work here also included sending Twitter, on a weekly basis, lists of posts slated for censorship.

In an article published in the NEJM Catalyst journal, those behind the effort now assess the Unit’s work as successful, crediting it with being able to “rapidly identify messages” deemed to contain inaccurate information about the virus, vaccines, treatments, and so on.

And although the article admits that “vaccine hesitancy” remains high around the world even two years after the vaccines were first introduced (something attributed to “disinformation and misinformation” that continues to worry the World Health Organization (WHO), the US Surgeon General, and “medical experts”), the New York City Health Department thinks it did well in getting its own narrative out, particularly in traditional media.

Keep reading

Disinformation Inc: Meet the groups hauling in cash to secretly blacklist conservative news

Well-funded “disinformation” tracking groups are part of a stealth operation blacklisting and trying to defund conservative media, likely costing the news companies large sums in advertising dollars, a Washington Examiner investigation found.

Major ad companies are increasingly seeking guidance from purportedly “nonpartisan” groups claiming to be detecting and fighting online “disinformation.” These same “disinformation” monitors are compiling secret website blacklists and feeding them to ad companies, with the aim of defunding and shutting down disfavored speech, according to sources familiar with the situation, public memos, and emails obtained by the Washington Examiner.

Brands, which have been seeking to promote products online through multiple websites to expand their digital footprint, are turning to corporate digital ad companies keyed into global markets. In turn, some of these companies are contracting “disinformation” trackers to obtain private information about which websites they should purportedly “defund.”

The Global Disinformation Index, a British group with two affiliated U.S. nonprofit groups sharing similar board members, is one entity shaping the ad world behind the scenes. GDI’s CEO is Clare Melford, former senior vice president for MTV Networks, and its executive director is Daniel Rogers, a tech advisory board member for Human Rights First, a left-leaning nonprofit group that says disinformation fuels “violent extremism and public health crises.”

“It’s devastating,” Mike Benz, the State Department’s ex-deputy assistant for international communications and information policy, told the Washington Examiner. “The implementation of ad revenue crushing sentinels like NewsGuard, Global Disinformation Index, and the like has completely crippled the potential of alternative news sources to compete on an even economic playing field with approved media outlets like CNN and the New York Times.”

Keep reading

Feds Adapting A.I. Used to Track ISIS to Combat American Dissent on Vaccines, Elections

The government’s campaign to fight “misinformation” has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.

The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.

DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.

The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms.

DARPA set four specific goals for the program, the first of which is illustrated with a short sketch after the list:

  1. Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
  2. Recognize persuasion campaign structures and influence operations across social media sites and communities.
  3. Identify participants and intent, and measure effects of persuasion campaigns.
  4. Counter messaging of detected adversary influence operations.
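DARPA has not published the SMISC tooling, but the first goal, detecting and tracking the spread of “memes,” can be illustrated with a toy frequency-trend sketch. The following Python is an invented illustration, not DARPA’s method; the posts, hashtags, and the growth rule are all made up:

```python
# Toy illustration of SMISC goal 1(a): tracking how an idea ("meme")
# spreads over time in a stream of timestamped posts.
# All posts and the "spreading" rule below are invented.
from collections import Counter, defaultdict

posts = [  # (day, text) pairs standing in for a real social media feed
    (1, "great weather today #sunny"),
    (1, "#BigStory is trending, look into it"),
    (2, "#BigStory everyone share this"),
    (2, "lunch at the new cafe #foodie"),
    (3, "#BigStory #BigStory tell your friends"),
]

# Count hashtag mentions per day.
daily = defaultdict(Counter)
for day, text in posts:
    for token in text.split():
        if token.startswith("#"):
            daily[day][token.lower()] += 1

# Flag hashtags whose daily counts never shrink and end higher than they
# started: a crude stand-in for "detecting the spread of an idea."
tags = {tag for counts in daily.values() for tag in counts}
for tag in sorted(tags):
    series = [daily[day][tag] for day in sorted(daily)]
    if series[-1] > series[0] and all(b >= a for a, b in zip(series, series[1:])):
        print(f"spreading: {tag} daily counts = {series}")
```

Attributing a flagged meme to a campaign and generating counter-messaging (goals 2 through 4) would require substantially more machinery; the sketch covers only the detection step.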

Keep reading