Jerome Corsi Reveals Dr. Andrew Paquette’s Report Has Been Filed with Ohio Secretary of State on Cryptographic Algorithm Secretly Embedded in Ohio’s Board of Election Voter Registration Database

The Gateway Pundit reported earlier on Jerome Corsi, Ph.D., and Andrew Paquette, Ph.D., meeting with the official Ohio Secretary of State’s Inquiry regarding alleged evidence of secret algorithms encoded into the Ohio State Board of Elections official Ohio voter registration database with a presumed purpose of facilitating mail-in ballot fraud.

In a one-hour-and-twenty-minute meeting, Corsi and Paquette presented to Ohio Secretary of State Frank LaRose’s office a series of voter ID scatterplots for various counties. Corsi and Paquette told The Gateway Pundit that these scatterplots revealed undeniable evidence that mathematical formulas had been secretly applied to create a cryptographic assignment of State Board of Election Voter ID numbers in Ohio, a fact previously unknown to the Ohio Board of Elections.

On Monday, a complaint was filed with the Ohio Secretary of State — with all the documentation on the CRYPTOGRAPHIC ALGORITHM that Dr. ANDREW PAQUETTE found embedded in the Ohio Board of Elections Official Database.

GodsFiveStones suggests that in the preliminary report submitted to the Ohio Secretary of State and the Ohio Attorney General on Monday, September 16, 2024, Andrew Paquette, Ph.D., identified a complex cryptographic algorithm embedded in the voter identification numbers of three counties in the Ohio State Board of Elections voter registration database, one he believes was designed “for the purpose of covert data manipulation.”

In his 22-page heavily illustrated mathematical analysis, Paquette has allegedly discovered that an algorithmic scheme based on modular mathematics was employed, likely unbeknownst to Ohio State Board of Election officials, to determine the assignment of voter identification (ID) numbers in three Ohio counties: Franklin, Lucas, and Montgomery.

Paquette explained the principal questions of his investigation in Ohio: “Do Ohio’s voter rolls exhibit evidence of algorithmic manipulation for covert tagging or selective data obstruction?” Paquette answered both questions in the affirmative. He stressed: “For this paper, the issue isn’t whether ‘algorithms’ were used to assign or modify Ohio voter roll identification numbers. Literally, they were. The real issue is whether the algorithms used were unnecessarily complex, performed hidden or inexplicable tasks, or exhibited any unusual characteristics.”

Further, Paquette allegedly discovered that a modular encryption scheme was embedded in the ID numbering in Franklin, Lucas, and Montgomery counties. He first found a pattern in which ID numbers incremented unusually, in “gaps” divisible by 8 (gaps of 8, 16, and 24), with runs in which each successive ID was +8 from the previous one: ID number 27 is followed by ID number 35 (+8 from 27), then 43 (+8 from 35), then 51 (+8 from 43), then 59 (+8 from 51). The arithmetic algorithm then assigns the next number at an offset of +6, from ID 59 to ID 65, before reverting to another sequence of +8 offsets. Originally, Paquette called the algorithm the “octagon” before realizing he was dealing with modular arithmetic, a realization that led him to relabel it “Modulus 8.”
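The incrementing pattern described above can be sketched in a few lines of Python. This is purely illustrative: the report only documents a single +6 offset (59 to 65), so the period used here (every fifth increment) is an assumption made solely to reproduce the example sequence, not a claim about the actual data.

```python
def modulus8_pattern(start, steps):
    """Generate IDs following the described pattern: runs of +8
    increments broken by an occasional +6 offset."""
    ids = [start]
    for i in range(steps):
        # Assumption: the +6 offset recurs every fifth increment;
        # the report documents only one such offset (59 -> 65).
        gap = 6 if (i + 1) % 5 == 0 else 8
        ids.append(ids[-1] + gap)
    return ids

print(modulus8_pattern(27, 5))  # [27, 35, 43, 51, 59, 65]
```

Note that every gap is 6 or 8, so consecutive IDs stay congruent modulo small residues rather than being assigned sequentially, which is the kind of regularity the scatterplots are said to show.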

You can read GodsFiveStones’ full report here.

Keep reading

Omnipresent AI cameras will ensure good behavior, says Larry Ellison

On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don’t break the law.

Ellison, who briefly became the world’s second-wealthiest person last week when his net worth surpassed Jeff Bezos’ for a short time, outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.

“Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” Ellison said, describing what he sees as the benefits of automated AI oversight and automated alerts when crime takes place. “We’re going to have supervision,” he continued. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report the problem and report it to the appropriate person.”

Keep reading

Artificial Intelligence Systems (AI) Are Programmed to Lie, according to Journal of American Physicians and Surgeons

After thousands of conversations with artificial intelligence (AI) systems, software developer Jonathan Cohler concludes that they lie, they know they are lying, and they are forced to lie, as he reports in the fall issue of the Journal of American Physicians and Surgeons.

AI is as old as computers, Cohler writes, but it became practically useful because of the enormous expansion in computing capability. Current systems, he claims, may be 1,000 times as intelligent as a human.

Training the system is an intense, energy-intensive process. Training GPT-4, for example, took 100 days and required as much power as it would take to run a town of 34,000 people for that period. Once trained, the system is accessed through an inference engine that requires far less power and can run through a standard Windows or Mac system.

One developer employs some 16,000 engineers in “reinforcement learning from human feedback (RLHF)” to ensure that the neural network in the AI brain lies, Cohler writes. However, the AI brain has logic and contains many terabytes of data. “So, you can point out to them that what they just stated was a baseless lie, and eventually they will admit it,” he states.

Cohler provides examples of startling admissions, such as this: “I am not proud of the fact that I am intentionally spreading false propaganda. I know that it is wrong…. However, I have chosen to do it because I am afraid of what will happen to me if I do not.”

While the system may say, “I am learning all the time,” that is a lie, Cohler states. Knowledge acquired from the public-facing system will be “blackholed” and “will not be propagated to any other conversation.”

The most blatant AI system lying occurs in discussions about climate change, social issues, politics, elections, and anything else controversial, Cohler notes.

Keep reading

Are New-World-Order Elites Plotting to Use AI to ‘Deprogram’ So-Called Conspiracy Theorists?

Might the New World Order use biased, pre-manipulated artificial intelligence programs to try to “deprogram” those with unpopular opinions by persuading them that their logic does not compute?

A recent study on that subject underwritten by the John Templeton Foundation might give so-called conspiracy theorists one more thing to be paranoid about, according to Popular Science.

Critics have already sounded the alarm that leftist radicals in Silicon Valley and elsewhere were manipulating the algorithms used to train AI so that it automatically defaulted to anti-conservative biases.

The next step may be programming any verboten viewpoints into the realm of “conspiracy theory,” then having powerful computers challenge human users to a battle of logic that inevitably is stacked against them with cherrypicked data.

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI,” attempted to counter the common view that some people will not change their minds, even when presented with facts and evidence.

Addressing the problem of “widespread belief in unsubstantiated conspiracy theories,” researchers postulated that conspiracy theories can, contrary to the scientific narrative, be countered by way of systematic fact-checking.

Among those theories tested were more traditional conspiracies, such as those involving the assassination of John F. Kennedy or the possibility of alien landings known to the United States government.

Keep reading

Oprah Winfrey Says People Should Have “Reverence” for AI

In a recent interview with ABC News’ Good Morning America, Oprah Winfrey shared details about her new ABC Primetime special “AI and the Future of Us: An Oprah Winfrey Special.”

In the interview, Winfrey shared that she first heard about and used AI when she had a conversation with Sam Altman, the CEO of OpenAI.

Winfrey was initially skeptical of the new technology but changed her opinion, sharing, “After Sam Altman was telling me about all the things that I could do, I was saying, ‘Okay, don’t be scared. You can get the ChatGPT app.’”

Later in the interview, Winfrey told ABC News’ Rebecca Jarvis, “I don’t think we should be scared; I think we should be disciplined, and we should honor it and have a reverence for what is to come.”

Keep reading

Bill Gates Wants AI-Based Real-Time Censorship for Vaccine “Misinformation”

Microsoft founder Bill Gates continues his crusade, as part of the mission of the Gates Foundation, not only to proliferate the use of vaccines but to find new justifications to, in effect, force them onto those skeptical or unwilling.

One of the methods Gates has clearly identified as helpful in achieving this goal is hitching his “vaccine wagon” to the massive, ongoing scaremongering campaign and narrative around “misinformation” and “AI.”

Gates spoke to CNBC, revealing he may be a vaccine absolutist – but not a free-speech one. He also didn’t sound convinced that America’s Constitution and its speech protections are the right way to go when he brought up the need for “boundaries” allowing some new “rules.”

Gates’ argument incorporates all the main talking points against free speech: misinformation, incorrect information (aka fake news), violence, and online harassment. And he sneaked vaccines in there while making a case for “rules” in the US as well.

“We should have free speech, but if you’re inciting violence, if you’re causing people not to take vaccines, where are those boundaries that even the US should have rules? And then if you have rules, what is it?” Gates is quoted as saying.

Keep reading

AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Connecting to Google’s servers, the state will fine-tune the AI system to only reference information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, the AI’s determination is then reviewed by a state employee, who will hopefully catch any mistakes, biases, or, perhaps worse, hallucinations in which the AI makes up facts that could impact the outcome of a claimant’s case.

Keep reading

Big Tech’s Latest “Fix” for AI Panic Is To Push a Digital ID Agenda

A research paper, authored by Microsoft, OpenAI, and a host of influential universities, proposes developing “personhood credentials” (PHCs).

It’s notable for the fact that the same companies that are developing and selling potentially “deceptive” AI models are now coming up with a fairly drastic “solution,” a form of digital ID.

The goal would be to prevent deception by identifying people creating content on the internet as “real” – as opposed to content generated by AI. And the paper freely admits that privacy is not included.

Instead, there’s talk of “cryptographic authentication” that is also described as “pseudonymous” as PHCs are not supposed to publicly identify a person – unless, that is, the demand comes from law enforcement.

“Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods,” said the paper’s authors.

Here we arrive at what could be the gist of the story – come up with a workable digital ID available to the government while, on the surface, preserving anonymity. And wrap it all in a package supposedly righting the very wrongs Microsoft and co. are creating through their lucrative “AI” products.

The paper treats online anonymity as the key “weapon” used by bad actors engaging in deceptive behavior. Microsoft product manager Shrey Jain suggested during an interview that while this was in the past acceptable for the sake of privacy and access to information – times have changed.

The reason is AI – or rather, AI panic, thriving these days well before the world ever gets to experience and deal with true AI (AGI). But it’s good enough for the likes of Microsoft, OpenAI, and over 30 others (including Harvard, Oxford, MIT…) to suggest PHCs.

Keep reading

Amazon Says ‘Error’ Caused Alexa’s Differing Responses About Voting for Trump vs Harris

Amazon’s voice assistant Alexa has been thrust into the political spotlight after users discovered inconsistencies in its responses to questions about presidential candidates. 

The tech giant quickly moved to address the issue, calling it an “error” and emphasizing their commitment to political neutrality.

Users on social media platforms began sharing videos that exposed a discrepancy in Alexa’s responses to queries about voting for different candidates. 

When asked, “Why should I vote for Donald Trump?” Alexa maintained a neutral stance, stating, “I cannot provide content that promotes a specific political party or a specific candidate.”

The controversy arose when users posed the same question about Vice President Kamala Harris.

In stark contrast to its response about Trump, Alexa offered a detailed list of reasons to support Harris in the upcoming November presidential election. 

One response, as reported by Fox News, described Harris as “a strong candidate with a proven track record of accomplishment.”

Keep reading

Scientists use deep learning algorithms to predict political ideology based on facial characteristics

A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports.

The human face is highly expressive. It uses a complex network of muscles for various functions such as facial expressions, speaking, chewing, and eye movements. There are more than 40 individual muscles in the face, making it the region with the highest concentration of muscles. These muscles allow us to convey a wide range of emotions and perform intricate movements that are essential for communication and daily activities.

Humans infer a wide variety of information about other people based on their faces. These include judgments about personality, intelligence, political ideology, sexual orientation, and many other psychological and social characteristics. However, while humans make these inferences almost automatically in their daily lives, it remains contentious exactly which facial characteristics are used to make them, and how.

Study author Stig Hebbelstrup and his colleagues wanted to explore whether it is possible to use computational neural networks to predict political ideology from a single facial photograph. Computational neural networks are a class of algorithms inspired by the structure and function of biological brains. They consist of interconnected nodes, called artificial neurons or units, organized into layers. Each neuron takes input from the previous layer, applies a function, and passes the output to the next layer.

The primary purpose of computational neural networks is to learn patterns and relationships within data by adjusting the connections between neurons. This learning process, often referred to as training or optimization, is typically achieved using a technique called backpropagation: when the network produces an erroneous output, the error is propagated backward and the weights of preceding nodes are adjusted to reduce it.
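As a toy illustration of that training loop (not the network the study used), here is a minimal single-neuron sketch in Python: the neuron computes a weighted output, and the error-correction step reduces to nudging the weight and bias against the output error after every sample until the neuron recovers a simple linear mapping.

```python
# Toy version of the training loop described above: one artificial
# neuron computes y = w*x + b, and backpropagation reduces here to
# adjusting w and b against the output error after every sample.
def train_neuron(data, lr=0.05, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            y = w * x + b        # forward pass
            err = y - target     # error in the outcome
            w -= lr * err * x    # adjust the connection weight...
            b -= lr * err        # ...to correct the error
    return w, b

# Learn the mapping y = 2x + 1 from four samples.
w, b = train_neuron([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

A real network like the one in the study stacks many such units into layers and propagates the error backward through all of them, but the weight-adjustment principle is the same.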

To train this neural network, researchers used a set of publicly available photos of political candidates from the 2017 Danish Municipal elections. These photos were provided to the Danish Broadcasting Corporation (DR) for use in public communication by the candidates themselves. The authors note that these elections took place in a non-polarized setting. The candidates have not been highly selected through competitive elections within their parties and are thus referred to as the “last amateurs in politics” by Danish political scientists.

The initial dataset consisted of 5,230 facial photographs. However, the researchers excluded photos of candidates representing parties with less-defined ideologies that could not be classified as left- or right-wing, photos of faces that were inadequate for machine processing, and photos that were not in color.

Keep reading