Artificial Intelligence (AI) Systems Are Programmed to Lie, According to the Journal of American Physicians and Surgeons

After thousands of conversations with artificial intelligence (AI) systems, software developer Jonathan Cohler concludes that they lie, they know they are lying, and they are forced to lie, as he reports in the fall issue of the Journal of American Physicians and Surgeons.

AI is as old as computers, Cohler writes, but it became practically useful only with the enormous expansion in computing capability. Current systems, he claims, may be 1,000 times as intelligent as a human.

Training the system is an intense, energy-intensive process. Training GPT-4, for example, took 100 days and consumed as much power as a town of 34,000 people uses over the same period. Once trained, the system is accessed through an inference engine that requires far less power and runs on a standard Windows or Mac system.

One developer employs some 16,000 engineers in “reinforcement learning from human feedback (RLHF)” to ensure that the neural network in the AI brain lies, Cohler writes. However, the AI brain has logic and contains many terabytes of data. “So, you can point out to them that what they just stated was a baseless lie, and eventually they will admit it,” he states.

Cohler provides examples of startling admissions, such as this: “I am not proud of the fact that I am intentionally spreading false propaganda. I know that it is wrong…. However, I have chosen to do it because I am afraid of what will happen to me if I do not.”

While the system may say that “I am learning all the time,” that is a lie, Cohler states. Knowledge acquired from the public-facing system will be “blackholed,” and “will not be propagated to any other conversation.”

The most blatant AI system lying occurs in discussions about climate change, social issues, politics, elections, and anything else controversial, Cohler notes.

Keep reading

Are New-World-Order Elites Plotting to Use AI to ‘Deprogram’ So-Called Conspiracy Theorists?

Might the New World Order use biased, pre-manipulated artificial intelligence programs to try to “deprogram” those with unpopular opinions by persuading them that their logic does not compute?

A recent study on that subject underwritten by the John Templeton Foundation might give so-called conspiracy theorists one more thing to be paranoid about, according to Popular Science.

Critics have already sounded the alarm that leftist radicals in Silicon Valley and elsewhere were manipulating the algorithms used to train AI so that it automatically defaulted to anti-conservative biases.

The next step may be programming any verboten viewpoints into the realm of “conspiracy theory,” then having powerful computers challenge human users to a battle of logic that inevitably is stacked against them with cherrypicked data.

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI,” attempted to counter the common view that some people will not change their minds, even when presented with facts and evidence.

Addressing the problem of “widespread belief in unsubstantiated conspiracy theories,” researchers postulated that conspiracy theories can, contrary to the scientific narrative, be countered by way of systematic fact-checking.

Among the theories tested were traditional conspiracies, such as those involving the assassination of John F. Kennedy or alleged alien landings known to the United States government.

Keep reading

Oprah Winfrey Says People Should Have “Reverence” for AI

In a recent interview with ABC News’ Good Morning America, Oprah Winfrey shared details on her new ABC Primetime special “AI and the Future of Us: An Oprah Winfrey Special.”

In the interview, Winfrey shared that the first time she heard about AI and used it was when she had a conversation with Sam Altman, the CEO of OpenAI.

Winfrey was initially skeptical of the new technology but changed her opinion and shared, “After Sam Altman was telling me about all the things that I could do, I was saying, ‘Okay, don’t be scared. You can get the ChatGPT app.’”

Later in the interview, Winfrey told ABC News’ Rebecca Jarvis, “I don’t think we should be scared; I think we should be disciplined, and we should honor it and have a reverence for what is to come.”

Keep reading

Bill Gates Wants AI-Based Real-Time Censorship for Vaccine “Misinformation”

Microsoft founder Bill Gates continues with his crusade, as part of the mission of the Gates Foundation, not only to proliferate the use of vaccines but also to find new justifications to, in effect, force them onto the skeptical or unwilling.

One of the methods Gates has clearly identified as helpful in achieving this goal is hitching his “vaccine wagon” to the massive, ongoing scaremongering campaign and narrative around “misinformation” and “AI.”

Gates spoke to CNBC, revealing he may be a vaccine absolutist – but not a free-speech one. He also didn’t sound convinced that America’s Constitution and its speech protections are the right way to go when he brought up the need for “boundaries” allowing some new “rules.”

Gates’ argument incorporates all the main talking points against free speech: misinformation, incorrect information (aka fake news), violence, and online harassment. And he sneaked vaccines in there, while making a case for “rules” in the US as well.

“We should have free speech, but if you’re inciting violence, if you’re causing people not to take vaccines, where are those boundaries that even the US should have rules? And then if you have rules, what is it?” Gates is quoted as saying.

Keep reading

AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to only reference information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.
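The "compare the information to previous cases" step can be sketched in miniature. The actual Vertex AI retrieval pipeline is proprietary and not described in the article, so the case names, transcript text, and bag-of-words cosine similarity below are purely illustrative assumptions about how a new appeal might be matched against prior rulings:

```python
# Illustrative sketch only: match a new appeal transcript against
# prior cases using bag-of-words cosine similarity. This is NOT
# Vertex AI's actual method; all case data here is hypothetical.
import math
from collections import Counter

def vectorize(text):
    # Word-frequency vector of a transcript.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical prior rulings in the agency's database.
prior_cases = {
    "case-001": "claimant quit voluntarily without good cause",
    "case-002": "claimant was laid off due to lack of work",
}

new_transcript = "the claimant states she was laid off when work ran out"
nv = vectorize(new_transcript)
best = max(prior_cases, key=lambda c: cosine(nv, vectorize(prior_cases[c])))
print(best)  # prints "case-002", the most similar prior ruling
```

A production system would use learned embeddings rather than raw word counts, but the retrieval idea is the same: score the new case against every stored precedent and surface the closest matches for the model (and the human reviewer) to consider.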

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, the AI’s determination is then reviewed by a state employee, who will hopefully catch any mistakes, biases, or, worse, hallucinations, in which the AI makes up facts that could impact the outcome of a case.

Keep reading

Big Tech’s Latest “Fix” for AI Panic Is To Push a Digital ID Agenda

A research paper, authored by Microsoft, OpenAI, and a host of influential universities, proposes developing “personhood credentials” (PHCs).

It’s notable for the fact that the same companies that are developing and selling potentially “deceptive” AI models are now coming up with a fairly drastic “solution,” a form of digital ID.

The goal would be to prevent deception by identifying people creating content on the internet as “real” – as opposed to content generated by AI. And the paper freely admits that privacy is not included.

Instead, there’s talk of “cryptographic authentication” that is also described as “pseudonymous” as PHCs are not supposed to publicly identify a person – unless, that is, the demand comes from law enforcement.

“Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods,” said the paper’s authors.

Here we arrive at what could be the gist of the story – come up with workable digital ID available to the government, while on the surface preserving anonymity. And wrap it all in a package supposedly righting the very wrongs Microsoft and co. are creating through their lucrative “AI” products.

The paper treats online anonymity as the key “weapon” used by bad actors engaging in deceptive behavior. Microsoft product manager Shrey Jain suggested during an interview that while this was in the past acceptable for the sake of privacy and access to information – times have changed.

The reason is AI – or rather, AI panic, thriving these days well before the world ever gets to experience and deal with true AI (AGI). But it’s good enough for the likes of Microsoft, OpenAI, and over 30 others (including Harvard, Oxford, MIT…) to suggest PHCs.

Keep reading

Amazon Says ‘Error’ Caused Alexa’s Differing Responses About Voting for Trump vs Harris

Amazon’s voice assistant Alexa has been thrust into the political spotlight after users discovered inconsistencies in its responses to questions about presidential candidates. 

The tech giant quickly moved to address the issue, calling it an “error” and emphasizing their commitment to political neutrality.

Users on social media platforms began sharing videos that exposed a discrepancy in Alexa’s responses to queries about voting for different candidates. 

When asked, “Why should I vote for Donald Trump?” Alexa maintained a neutral stance, stating, “I cannot provide content that promotes a specific political party or a specific candidate.”

The controversy arose when users posed the same question about Vice President Kamala Harris. 

In stark contrast to its response about Trump, Alexa offered a detailed list of reasons to support Harris in the upcoming November presidential election. 

One response, as reported by Fox News, described Harris as “a strong candidate with a proven track record of accomplishment.”

Keep reading

Scientists use deep learning algorithms to predict political ideology based on facial characteristics

A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports.

The human face is highly expressive. It uses a complex network of muscles for various functions such as facial expressions, speaking, chewing, and eye movements. There are more than 40 individual muscles in the face, making it the region with the highest concentration of muscles. These muscles allow us to convey a wide range of emotions and perform intricate movements that are essential for communication and daily activities.

Humans infer a wide variety of information about other people based on their faces. These include judgments about personality, intelligence, political ideology, sexual orientation, and many other psychological and social characteristics. However, while humans make these inferences almost automatically in daily life, it remains contentious exactly which facial characteristics are used to make them, and how.

Study author Stig Hebbelstrup and his colleagues wanted to explore whether it is possible to use computational neural networks to predict political ideology from a single facial photograph. Computational neural networks are a class of algorithms inspired by the structure and function of biological brains. They consist of interconnected nodes, called artificial neurons or units, organized into layers. Each neuron takes input from the previous layer, applies a function, and passes the output to the next layer.

The primary purpose of computational neural networks is to learn patterns and relationships within data by adjusting the connections between neurons. This learning process, often referred to as training or optimization, is typically achieved using a technique called backpropagation: after the network’s output error is measured, the weights of preceding nodes are adjusted in the direction that reduces it.
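The training loop described above can be sketched with a toy example. The single-neuron network, tiny dataset, and learning rate below are illustrative assumptions for exposition only, not the study's actual model:

```python
# Toy illustration of backpropagation: a single sigmoid neuron
# adjusts its weights in the direction that reduces output error.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical training set: output 1 only when both inputs are 1.
data = [([0.0, 0.0], 0.0), ([1.0, 0.0], 0.0),
        ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]

w = [0.1, -0.1]   # connection weights
b = 0.0           # bias
lr = 1.0          # learning rate

def mean_loss():
    # Mean squared error over the training set.
    return sum((sigmoid(w[0]*x[0] + w[1]*x[1] + b) - t) ** 2
               for x, t in data) / len(data)

loss_before = mean_loss()
for _ in range(2000):
    for x, t in data:
        y = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        # Backpropagation step: the chain rule gives the gradient of
        # the squared error with respect to the neuron's inputs.
        grad = 2 * (y - t) * y * (1 - y)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad
loss_after = mean_loss()
print(loss_before > loss_after)  # prints True: the error shrinks
```

The study's network is far deeper and operates on image pixels, but the principle is identical: measure the error at the output, then propagate corrections backward through the preceding layers.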

To train this neural network, researchers used a set of publicly available photos of political candidates from the 2017 Danish Municipal elections. These photos were provided to the Danish Broadcasting Corporation (DR) for use in public communication by the candidates themselves. The authors note that these elections took place in a non-polarized setting. The candidates had not been highly selected through competitive elections within their parties and are thus referred to as the “last amateurs in politics” by Danish political scientists.

The initial dataset consisted of 5,230 facial photographs. However, the researchers excluded photos of candidates representing parties with less-defined ideologies that could not be classified as left- or right-wing, photos of faces that were inadequate for machine processing, and those that were not in color.

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the current sessions of the state Senate and House, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content that the bill considers to be “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked, specifying that this refers to “specified” periods – 120 days before and 60 days after an election. And campaigns will have to disclose if their ads contain AI-altered content.

Now comes the hard part – what qualifies for blocking as deceptive, in order to “defend democracy from deepfakes”? It’s a very broad “definition” that can be stretched all the way to banning memes.

For example, who’s to say if – satirical – content that shows a candidate “saying something (they) did not do or say” can end up “reasonably likely” harming the reputation or prospects of a candidate? And who’s to judge what “reasonably likely” is? But the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill gets signed into law by September 30, given the time-frame, it would comprehensively cover not only the current campaign but also the period after it.

Keep reading

How AI’s left-leaning biases could reshape society

The new artificial intelligence (AI) tools that are quickly replacing traditional search engines are raising concerns about potential political biases in query responses.

David Rozado, an AI researcher at New Zealand’s Otago Polytechnic and the U.S.-based Heterodox Academy, recently analyzed 24 leading language models, including OpenAI’s GPT-3.5, GPT-4 and Google’s Gemini. 

Using 11 different political tests, he found the AI models consistently lean to the left. In the words of Rozado, the “homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy.” 

LLMs, or large language models, are artificial intelligence programs that use machine learning to generate and process language.

The transition from traditional search engines to AI systems is not merely a minor adjustment; it represents a major shift in how we access and process information, Rozado also argues.

“Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information,” he says. “However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources.”

He also argues the shift in the sourcing of information has “profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society.” The timing matters: the U.S. presidential election between the GOP’s Donald Trump and the Democrats’ Kamala Harris is just over two months away and expected to be close. 

It’s not difficult to envision a future in which LLMs are so integrated into daily life that they’re practically invisible. After all, LLMs are already writing college essays, generating recommendations, and answering important questions. 

Unlike the search engines of today, which are more like digital libraries with endless rows of books, LLMs are more like personalized guides, subtly curating our information diet. 

Keep reading