Congress To Investigate WHO Plans To Use “Listening Surveillance Systems” To Identify “Misinformation”

If you’ve been following our reporting on the issue, you’ll already know that the new World Health Organization (WHO) pandemic prevention initiative, the Preparedness and Resilience for Emerging Threats (PRET), recommends using “social listening surveillance systems” to identify “misinformation.” But as more people are learning about how unelected bodies are being used to suppress speech and potentially override sovereignty, it’s starting to get more pushback.

According to documents from the UN agency, PRET aims to “guide countries in pandemic planning” and work to “incorporate the latest tools and approaches for shared learning and collective action established during the COVID-19 pandemic.”

The PRET document describes misinformation as a “health threat,” and refers to it as an “infodemic.”

“Infodemic is the overabundance of information – accurate or not – which makes it difficult for individuals to adopt behaviors that will protect their health and the health of their families and communities. The infodemic can directly impact health, hamper the implementation of public health countermeasures and undermine trust and social cohesiveness,” the document states.

However, it goes on to recommend invasive methods of countering the spread of misinformation.

“Establish and invest in resources for social listening surveillance systems and capacities to identify concerns as well as rumors and misinformation,” the WHO wrote in the PRET document.

“To build trust, it’s important to be responsive to needs and concerns, to relay timely information, and to train leaders and HCWs in risk communications principles and encourage their application.”

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, provides a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.
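The article does not include the service’s actual API, but the severity-score pattern it describes can be sketched in a few lines. The following is a minimal, purely illustrative toy in Python — the lookup table, function names, and thresholds are all hypothetical, and a real system would use trained models rather than keyword matching:

```python
# Toy illustration of the severity-score moderation pattern described above.
# This is NOT the Azure AI Content Safety API; all names here are hypothetical.
# A real service would use trained multilingual models; this sketch uses a
# keyword lookup purely to show how severity scores drive moderator routing.

# Hypothetical severity lookup: term -> severity (0 = safe, 6 = most severe)
SEVERITY_LOOKUP = {
    "insult": 2,
    "threat": 6,
}

def score_text(text: str) -> int:
    """Return the highest severity found in the text (0 if nothing matched)."""
    words = text.lower().split()
    return max((SEVERITY_LOOKUP.get(w, 0) for w in words), default=0)

def route(text: str, action_threshold: int = 4) -> str:
    """Map a severity score to a moderator action: high scores are flagged
    for action, low nonzero scores go to human review, zero passes through."""
    severity = score_text(text)
    if severity >= action_threshold:
        return "flag_for_action"
    elif severity > 0:
        return "review"
    return "allow"
```

The design point the article attributes to Microsoft is the middle tier: rather than a binary allow/block decision, a graded score lets human moderators prioritize what actually requires action.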

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

New Democrat Bill Calls For A Federal Agency To Create “Behavioral Codes,” Introduce “Disinformation Experts”

On May 18, two US senators introduced the Digital Platform Commission Act of 2023, a bill that seeks to give powers to a new federal agency that will set up a council to regulate AI, in the context of social platforms.

More precisely, the new body – the Federal Digital Platform Commission – would “rule” on enforceable “behavioral codes,” and its staff would include “disinformation experts.”

The move by the two Democratic senators – Michael Bennet and Peter Welch – appears timed to congressional testimony delivered by OpenAI CEO Sam Altman: the bill was presented shortly afterwards, and it backs Altman’s call to form a new federal agency of this kind.

Altman had more thoughts on how all this should work – the new agency, according to him, might be given the power to restrict AI development via licenses or credentialing.

The speed with which the two senators picked up on this to announce their bill may owe to the fact that Bennet “only” had to go back and update one he had already introduced in 2022. This time around, the proposed legislation has been changed in a number of ways, most notably by redefining what a digital platform is.

The bill wants this definition to also cover companies that provide content “primarily” generated by algorithmic processes. It does this by proposing that the future Commission be given authority over how personal information is used in decision-making or content generation – language thought to refer specifically to tech like ChatGPT.

Radio Aryan: ‘Hitler fan’ James Allchurch who had ‘pro white ideology’ jailed for racist podcasts

A man who broadcast thousands of recordings containing racist, homophobic and anti-Semitic content has been jailed.

James Allchurch, who also went by the name of Sven Longshanks, from Gelli in Pembrokeshire, was found guilty of ten offences following a trial at Swansea Crown Court.

Podcaster James Allchurch, 51, made a series of episodes that were “highly racist, anti-Semitic and white supremacist in nature”, Swansea Crown Court heard.

The name Sven Longshanks is a reference to King Edward I – who was responsible for expelling Jews from England in 1290.

Prosecutor Jonathan Rees KC said Allchurch was the owner of the website, the main host and responsible for distributing the audio recordings.

Allchurch was joined by guests including National Action co-founder Alex Davies, 27, of Swansea, South Wales, who was jailed in June last year for eight and a half years for being a member of the banned far-right organisation.

New DHS Training Videos Reveal How Far The Biden Administration Will Go To Eliminate Opposition

Consider the following scenario.

You meet up with your friend from college. While you went on to make a career for yourself in HR, she married right after graduation and started her family. You are disturbed by how she has changed over the last 10 years. She has doubts about the 2020 presidential election and Covid vaccines, speaks highly of the Dobbs Supreme Court decision, and refuses to drink Bud Light. She has also become quite religious, attending potlucks and Bible studies, and laments how immoral and unsafe the world has become. You later discover she expresses her dangerous views to a sizable crowd of followers on social media.

You have a few choices on how to respond:

(1) You cut off all communication with her and report her to federal authorities for spreading misinformation.

(2) You confront her about the falsehoods and try to correct her in a kind yet firm manner.

(3) You try to investigate by talking to her husband and children as well as some of her friends.

According to documents obtained by the America First Legal Foundation, agents at the Department of Homeland Security (DHS) were in the process of producing short videos for these kinds of scenarios for “bystander training.” Other videos involved “a white teenage male who hurls ‘racial epithets’ and flashes a gun at his girlfriend and a young environmental activist.” Apparently, the leaders of the DHS believe these are the types most at risk of committing acts of terror — not jihadists, radical Marxists, or demented incels who are almost always the main perpetrators of such violence.

St. Petersburg Uhuru members speak for first time since indictment

It’s a matter of free speech, says Omali Yeshitela, the longtime leader of the St. Petersburg-based Uhuru Movement and founder of the African People’s Socialist Party.

Yeshitela was indicted by a federal grand jury in Tampa last month and accused of working with Russian nationals to sow discord in the United States, spread pro-Russian propaganda and influence elections, along with two other members of the Uhuru Movement, Penny Joanne Hess and Jesse Nevel.

On Wednesday, the three Uhuru members spoke to the press for the first time since their indictment.

“I believe in free speech,” Yeshitela said at the news conference. “If I didn’t believe in free speech, I would never have said anything because they kill Black people for talking in this country.”

Yeshitela founded the African People’s Socialist Party in 1972. The Uhuru Movement is the party’s activist branch, started in the 1990s. The group supports reparations for Black people and has protested racism, colonialism and capitalism for decades. Hess and Nevel are the chairpersons of groups for white allies under the leadership of the African People’s Socialist Party and the Uhuru Movement, respectively.

It’s Been 10 Years Since the IRS’s Tea Party Scandal. Will Congress Finally Act?

A bombshell revelation came to light 10 years ago this week in 2013, when the IRS apologized for years of deliberately delaying applications for tax-exempt status from right-of-center organizations. Hundreds of groups were improperly subjected to baseless investigations, invasive and improper demands about their donors, and lengthy delays in processing routine paperwork. The IRS’s actions at the time put a severe chill on conservative speech at the height of the Tea Party movement and leading up to the 2012 presidential election.

The revelation set off years of investigations and resignations at the IRS. Yet the agency secretly continued its efforts to silence nonprofits disfavored by the agency’s bureaucrats and political appointees. In November 2013, the IRS proposed new regulations that were nearly as damaging to the First Amendment as the targeting itself.

The agency proposed severe limits on issue speech by certain nonprofits, which would have forced many nonprofits to reclassify as political action committees and publicly expose their donors’ names and home addresses. The IRS also solicited comments on potentially expanding the restrictions to cover trade associations and other groups in the future. After backlash from across the political spectrum, the proposal was withdrawn, but that victory does not change the sad fact that federal law governing nonprofits is no safer today than it was when IRS officials decided they had the authority to discriminate against groups based on their views.

Enter the American Confidence in Elections (ACE) Act, the subject of a full committee hearing tomorrow in the Committee on House Administration. The legislation addresses a broad range of election, free speech, and privacy-related issues, including remedies for the IRS’s sordid history of policing speech. The bill, first introduced in 2022 and awaiting reintroduction this Congress, would prohibit the IRS from writing new speech-chilling rules for nonprofits and codify Trump-era reforms protecting nonprofit donors against unnecessary disclosures and warehousing of their personal information.

Bringing the War on Terror Home to Target Americans for ‘Disinformation’

The U.S. government took the information techniques it learned after 9/11 and has turned them on Americans. America may have lost the Great War on Terror, but our technocratic elites could still win their war against American liberty.

That’s the argument made by Jacob Siegel in a 13,000-word Tablet magazine article titled “A Guide to Understanding the Hoax of the Century,” which seeks to explain “a high-level hoax perpetrated against the American people” by elitists “who believe themselves to be infallible.”

Specifically, Siegel writes, these “infallible” elitists believe they are saving the world from “disinformation,” which is whatever they view as untruths about Russia, Ukraine, Donald Trump, Covid, climate change, election fraud, Brexit, etc. You name a flavor of disinfo, and they want to save us from it. And they’re operating in the State Department and other federal agencies, in numerous foundations and NGOs, and at a hundred academic “centers” that have sprung up like ‘shrooms since 2016.

FDA Commissioner Dr. Robert Califf says “regulation” is needed to target “misinformation”

In an interview with CNBC, FDA Commissioner Dr. Robert Califf said that online misinformation is harming people’s life expectancy, that “better regulation” is needed on how to handle health misinformation, and that “specific authorities at FDA, FTC, and other areas are going to be needed.”

“We know more and more about misinformation. It relates back to this life expectancy,” Califf said. “Why aren’t we using knowledge of diet? It’s not that people don’t know about it. Why aren’t we using medical products as effectively and efficiently as our peer countries? A lot of it has to do with choices that people make because of the things that influence their thinking. The COVID vaccines and the antivirals give us an easy way to talk about it, but this is not limited to those areas. In heart disease, so many people don’t take their medicines, even though they’re now generic and very low-cost, often [they’re] deluded into taking things that are sold over the Internet that aren’t effective.”

According to the FDA commissioner, one of the solutions is telling the truth at “a louder volume.”

“In the good old days, when I was a practicing cardiologist, for the most part, people developed products, they got through the FDA, the label determined what was talked about, the Internet didn’t exist, you advertised in medical meetings and journals. There was sort of a hierarchy of information that went through the prescriber or the implanter in the case of devices to the patient. Of course, the problem in that system is it left a lot of people out. We now know about that. Now, everyone’s included because everyone’s connected to the Internet. But we can put out a statement about what we’ve determined based on the highest level of evidence, within ten minutes, someone who’s thought ten minutes about it can reach a billion people. And there’s nothing that restricts them from telling things that are not true. This has always existed. … But they couldn’t reach so many people,” he explained.

He added that there isn’t enough regulation of health information and that this is “impacting our health in very detrimental ways.” As such, he thinks “there is a real need for better regulation of how to deal with this complex information.”

Califf noted that the FDA already has regulatory authority over the content of advertisements on tech platforms, but he feels the agency could do it better.

Federal Reserve head is questioned on controversial “debanking of disfavored individuals”

During a recent House Committee hearing, Federal Reserve Chair Jerome Powell was grilled on “Operation Choke Point 2.0” — an alleged Biden administration effort that Rep. Warren Davidson (R-OH) described as being “particularly focused on debanking people that are disfavored by…the current…executive branch.”

Davidson told Powell that he’d spoken with multiple bankers who said “they’ve never seen a higher degree of regulatory burden, steering guidance, shaping activities in the market from regulators.”

He attributed this heightened scrutiny to Operation Choke Point 2.0 — a reference to an alleged extension of the original Operation Choke Point. The first Operation Choke Point was an Obama-era debanking effort that began in 2013 and attempted to prevent gun dealers, payday lenders, and other companies that were deemed to be “high risk” from accessing banking services. Some people in the cryptocurrency industry claim that Operation Choke Point 2.0 is now being carried out by the Biden administration and is primarily focused on deterring banks from doing business with cryptocurrency firms.

“When people really feel like some third party is going to steer or shape their money, they don’t trust it,” Davidson added. “I mean the unbanked and the underbanked fundamentally that’s lack of trust is part of why they don’t use our banking system today. In fact, that’s part of the appeal of the digital asset space…the permissionless nature of it.”

Davidson continued by suggesting that lots of people working in the financial services space “feel threatened by the prospect of change” and are attempting to restrict access to services such as cryptocurrency.

“They’ve maybe reluctantly concluded that you can’t ban crypto,” Davidson said. “They at least want to keep it account based so some third party can actually control the assets which is a polite way of saying, ‘We don’t actually trust our citizens to control their money or their assets, we’ll let somebody else do it for them because we can control those third parties.’”

Davidson then pressed Powell on whether financial regulators use their powers to control third parties.

“If you don’t comply with the regulatory regime, you don’t get to operate a financial services business, right?” Davidson asked Powell.

“That’s right,” Powell confirmed.
