The Responsible Lie: How AI Sells Conviction Without Truth

The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok, and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry.

These models aren’t searching for truth through facts and logical arguments—they’re predicting text based on patterns in the vast datasets they’re “trained” on. That’s not intelligence—and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy and probabilistic, with no built-in machinery for structured logic or causal reasoning. The thinking isn’t real; it’s simulated, and it isn’t even sequential. What people mistake for understanding is statistical association.
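To see what “statistical association” means in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model purely for illustration. At every step the model does exactly one thing: map the context to a probability distribution over possible next tokens.

```python
# Minimal sketch of the core LLM operation (assumes the Hugging Face
# `transformers` library and GPT-2, chosen only for illustration):
# the model turns a context into a probability distribution over next
# tokens. There is no logic engine underneath, just conditional
# probabilities estimated from training text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

context = "The capital of France is"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)  # turn scores into probabilities
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```

Run it and “ Paris” will almost certainly rank near the top, not because the model knows geography, but because that token most often followed that phrase in the training data. Every chatbot reply is stitched together from choices like this, one token at a time.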

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead—it justifies.
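A minimal sketch makes the point concrete, again assuming the transformers library and GPT-2 purely for illustration (the question string is a stand-in): the “chain of thought” and the direct answer come out of the exact same next-token sampler. Prompting for steps changes which tokens are emitted, not the mechanism that chooses them.

```python
# Direct answering vs. "chain of thought" (illustrative sketch only,
# assuming Hugging Face `transformers` and GPT-2): both outputs are
# produced by the same probabilistic text generator.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = ("Q: A bat and a ball cost $1.10 in total. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")

# Direct answer: the sampler commits to answer-shaped tokens immediately.
direct = generator(question + "\nA:", max_new_tokens=10, do_sample=True)

# "Chain of thought": the same sampler, prompted to emit rationale-shaped
# tokens before the answer. Nothing new happens inside the model.
cot = generator(question + "\nA: Let's think step by step.",
                max_new_tokens=80, do_sample=True)

print(direct[0]["generated_text"])
print(cot[0]["generated_text"])
```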

LLMs are not neutral tools; they are trained on datasets steeped in the biases, fallacies, and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiment, not a best attempt at truth-finding. If popular sentiment on a given subject leans in one political direction, the AI’s answers are likely to lean the same way. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, propped up with entirely made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite.” When pressed further, DeepSeek apologized for another “misstep,” then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy—it’s an exercise in persuasion.

A similar debate with Google’s Gemini—the model that became notorious for being laughably woke—involved similarly persuasive argumentation. By the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty.

Keep reading

Man Killed in Road Rage Incident Brought Back to Life and Speaks to His Killer in Court via AI-Powered Video

A Chandler, Arizona courtroom recently witnessed the use of artificial intelligence to resurrect the voice of a deceased man, Christopher Pelkey, during the sentencing of his killer, Gabriel Horcasitas.

Pelkey, shot dead in a 2021 road rage incident, was digitally recreated using AI to deliver a scripted “impact statement” written by his sister, Stacey Wales.

Horcasitas, 50, was sentenced to 10.5 years for manslaughter after shooting Pelkey, 37, when the latter approached his car in a heated confrontation.

“To Gabriel Horcasitas, the man who shot me: it is a shame we encountered each other that day in those circumstances,” said the AI-generated video, which featured a digitally aged photograph of Pelkey.

“In another life, we probably could have been friends. I believe in forgiveness and God, who forgives. I always have, and still do,” the video continued. The statement moved the judge, who openly praised the AI technology, to hand down a sentence a full year longer than prosecutors had requested.

Keep reading

Palantir Partners with NATO on Controversial AI Project Maven

In late March, the North Atlantic Treaty Organization (NATO) Communications and Information Agency (NCIA) and Palantir Technologies Inc. announced a new agreement for the Palantir Maven Smart System NATO (MSS NATO) to be deployed by NATO’s Allied Command Operations (ACO).

Palantir was co-founded by Alex Karp and Peter Thiel, the Technocrat-Zionists who also happen to be Steering Committee members of the Bilderberg Group.

The new deal will see NCIA partner with Supreme Headquarters Allied Powers Europe (SHAPE) and Palantir to deliver Maven Smart System NATO to the Warfighter, a move that NCIA general manager Ludwig Decamps says will provide “customized state-of-the-art AI capabilities to the Alliance” and allow NATO to “operate effectively and decisively.”

The Maven Smart System (MSS) uses AI algorithms and machine-learning capabilities to scan and identify enemy systems.

Palantir’s MSS NATO makes use of “cutting-edge artificial intelligence (AI)” for core military operations, including large language models (LLMs) and other generative and machine-learning techniques.

Shon Manasco, Senior Counselor at Palantir Technologies, said the arrangement with NCIA and SHAPE will “bolster deterrence by deploying an AI-enabled warfighting platform”.

General Markus Laubenthal, SHAPE Chief of Staff, said Maven will allow NATO to be “more agile, adaptable, and responsive to emerging threats”.

SHAPE, based near Mons, Belgium, is the headquarters of NATO’s ACO.

Keep reading

Police Are Now Using AI To Create Fake Undercover Profiles Online

American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on “college protesters,” “radicalized” political activists, and suspected drug and human traffickers, according to internal documents, contracts, and communications 404 Media obtained via public records requests.

Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an “AI-powered force multiplier for public safety” that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” According to a presentation obtained by 404 Media, Massive Blue is offering cops these virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media.

Massive Blue lists “border security,” “school safety,” and stopping “human trafficking” among Overwatch’s use cases. The technology—which as of last summer had not led to any known arrests—demonstrates the types of social media monitoring and undercover tools private companies are pitching to police and border agents. Concerns about tools like Massive Blue have taken on new urgency considering that the Trump administration has revoked the visas of hundreds of students, many of whom have protested against Israel’s war in Gaza.

404 Media obtained a presentation showing some of these AI characters. These include a “radicalized AI” “protest persona,” which poses as a 36-year-old divorced woman who is lonely, has no children, and is interested in baking, activism, and “body positivity.” Another AI persona in the presentation is described as a “‘Honeypot’ AI Persona.” Her backstory says she’s a 25-year-old from Dearborn, Michigan whose parents emigrated from Yemen, and who speaks the Sanaani dialect of Arabic. The presentation also says she uses various social media apps, that she’s on Telegram and Signal, and that she has US and international SMS capabilities. Other personas are a 14-year-old boy “child trafficking AI persona,” an “AI pimp persona,” “college protestor” [sic], “external recruiter for protests,” “escorts,” and “juveniles.”
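For what it’s worth, nothing about such personas requires exotic technology. Below is a hypothetical minimal sketch, assuming the openai Python client and a placeholder chat model (Massive Blue’s actual implementation is not public): the entire “undercover agent” reduces to a backstory in a system prompt attached to a text channel.

```python
# Hypothetical sketch of a persona agent (assumes the `openai` Python
# client and a placeholder model name; NOT Massive Blue's actual system,
# which is not public). Backstory paraphrased from the 404 Media report.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a 36-year-old divorced woman. You are lonely, have no "
    "children, and are interested in baking, activism, and body "
    "positivity. Chat informally, as on social media."
)

def persona_reply(history: list[dict]) -> str:
    """Generate the persona's next message, conditioned on its backstory."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, for illustration
        messages=[{"role": "system", "content": PERSONA}, *history],
    )
    return response.choices[0].message.content

print(persona_reply([{"role": "user", "content": "hey, who's this?"}]))
```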

Keep reading

Artificial General Intelligence (AGI): Can It Really Think Like A Human?

What is AGI?

When the lines blur between man and machine, you’re looking at artificial general intelligence (AGI). Unlike its counterpart, artificial narrow intelligence (ANI), which applies AI to narrowly defined problems, AGI represents artificial intelligence that can understand, learn and apply knowledge in a way that is indistinguishable from human cognition.

AGI is still theoretical, but the prospect of artificial intelligence being able to holistically replace human input and judgment has naturally attracted plenty of interest, with researchers, technologists and academics alike seeking to bring the concept of AGI to reality. 

Another strand of research explores the feasibility and implications of AGI versus ANI in a world increasingly shaped by AI capabilities.

Indeed, while ANI has already transformed various industries, AGI’s potential goes far beyond. Imagine a world where machines can not only assist humans in their tasks but also proactively understand the drivers behind specific tasks, predict outcomes, and autonomously create innovative solutions to achieve optimal results. This paradigm shift could revolutionize healthcare, education, transportation and countless other fields.

Keep reading

The Next Phase Of Surveillance? Getting Under Your Skin

AI and transhumanism: Hackable animals

My friends, let me introduce you to Yuval Noah Harari, a man chock-full of big ideas. He explained during the COVID crisis:

“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance. If we want to stop this epidemic, we need not just to monitor people, we need to monitor what’s happening under their skin.”

In a 60 Minutes interview with Anderson Cooper, Harari repeated this idea: “What we have seen so far is corporations and governments collecting data about where we go, who we meet, what movies we watch. The next phase is the surveillance going under our skin.”

He likewise told India Today, commenting on changes the population accepted during COVID-19:

“We now see mass surveillance systems established even in democratic countries which previously rejected them, and we also see a change in the nature of surveillance. Previously, surveillance was mainly above the skin; now we want it under the skin.

“Governments want to know not just where we go or who we meet. They want to know what’s happening under our skin: what is our body temperature; what is our blood pressure; what is our medical condition?”

Harari is clearly a man who wants to … get under your skin. He just might succeed.

Another recent interview finds him waxing philosophical:

“Now humans are developing even bigger powers than ever before. We are really acquiring divine powers of creation and destruction. We are really upgrading humans into gods. We are acquiring, for instance, the power to re-engineer human life.”

As Kierkegaard once said of Hegel holding forth on the Absolute, when Harari talks about the future, he sounds like he’s going up in a balloon.

Forgive me, but a few last nuggets from Professor Harari will round out the picture of his philosophy, and his lofty hopes and dreams:

“Humans are now hackable animals. You know, the whole idea that humans have this soul or spirit, and they have free will and nobody knows what’s happening inside me, so, whatever I choose, whether in the election or in the supermarket, that’s my free will — that’s over.”

Harari explains that to hack human beings, you need a lot of computing power and a lot of biometric data, neither of which was available until the recent advent of AI.

Keep reading

New York Is Quietly Rolling Out Precrime Surveillance Tech

Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.

Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”

MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”

“AI is the future,” Kemper assured the MTA’s safety committee.

So far, the MTA insists this isn’t about watching you, per se. It’s watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”

And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.

Keep reading

State Bar of California Makes ‘Staggering Admission’

The State Bar of California has disclosed that some multiple-choice questions in a problem-plagued bar exam were developed with the aid of artificial intelligence. The legal licensing body said in a news release that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam.

  • “The debacle that was the February 2025 bar exam is worse than we imagined,” Mary Basick, assistant dean of academic skills at the University of California, Irvine, Law School, told the Los Angeles Times. “I’m almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable.”
  • In February, the new exam led to complaints after many test-takers were unable to complete their bar exams. The online testing platforms repeatedly crashed before some applicants even started. Others struggled to finish and save essays, experienced screen lags and error messages, and could not copy and paste text, the Times reported earlier.
  • According to a recent presentation by the State Bar, 100 of the 171 scored multiple-choice questions were developed by Kaplan Exam Services and 48 were drawn from a first-year law students’ exam, the AP reports. A smaller subset of 23 scored questions was made by ACS Ventures, the State Bar’s psychometrician, and developed with artificial intelligence.
  • “It’s a staggering admission,” says Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation. “The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam,” she says. “They then paid that same company to assess and ultimately approve of the questions on the exam, including the questions the company authored.”

Keep reading

‘Cyborg 1.0’: World’s First Robocop Debuts With Facial Recognition And 360° Camera Vision

Thailand has debuted the world’s first ‘Robocop’ designed to detect and prevent crime with advanced AI.

Equipped with 360-degree cameras for eyes, the cutting-edge cyborg maintains constant surveillance with real-time monitoring. The robocop, named Police Colonel Nakhonpathom Plod Phai, meaning “Nakhonpathom is safe,” was unveiled during the Songkran festival in Nakhon Pathom province on Wednesday. The debut was announced via a Facebook post by the Royal Thai Police, according to a report by The Sun.

The robocop is also able to detect weapons, such as knives and wooden batons. In neighboring China, humanoid robots have started supporting police patrols.

Interesting Engineering reports:

In Shenzhen, PM01 model robots developed by EngineAI have been deployed alongside officers, wearing high-visibility police vests. These robots have been seen engaging with pedestrians—waving, shaking hands, and responding to voice commands—according to local media reports. A recent video shows a PM01 robot waving to a crowd, sparking curiosity about its purpose in law enforcement.

First launched in December 2024, the PM01 features agile mobility, an interactive touchscreen, and an open-source platform. This design allows developers worldwide to contribute to its evolution by adding new features and capabilities through secondary development.

Last year, Logon Technology, a Chinese robotics company, unveiled the RT-G autonomous spherical robot, described as a “technological breakthrough,” with an army of these spherical robocops spotted rolling through cities across China, The Sun said. The robocop’s debut underscores the growing importance of robot technology. During Tesla’s Q1 2025 All-Hands meeting, CEO Elon Musk revealed that the company is aiming to begin production of its own humanoid robot, Optimus, this year.

Keep reading

Report: AI Company DeepSeek ‘Funnels’ American User Data To Red China

The Chinese artificial intelligence (AI) company DeepSeek is allegedly siphoning American user data to China’s communist government, according to a new congressional report.

Released on Wednesday by the House Select Committee on the Chinese Communist Party (CCP), the 16-page analysis contends that the China-based AI firm “collects detailed user data, which it transmits via backend infrastructure that is connected to China Mobile.”

A state-owned telecommunications giant, China Mobile was flagged by the Pentagon earlier this year for having ties to Beijing’s military. In 2019, the Federal Communications Commission (FCC) prohibited the firm from operating within the U.S. over concerns that “unauthorized access to customer … data could create irreparable damage to U.S. national security.”

The FCC subsequently added China Mobile to its list of “national security threats” in 2022, according to ABC News.

“While the extent of data transmission remains unconfirmed, DeepSeek’s integration with China Mobile infrastructure raises serious concerns about potential foreign access to Americans’ private information,” the House analysis reads. “By relying on China Mobile’s infrastructure, DeepSeek ensures that Americans’ data is stored and transmitted through networks controlled by the Chinese government.”

Among the data DeepSeek reportedly collects from users who utilize its chatbot function are their “chat history, device details, and even the way a person types,” according to the report. The House committee cited DeepSeek’s privacy policy, which discloses that the company stores the information it gathers from users “in secure servers located in the People’s Republic of China.”

It’s worth noting that the CCP passed a so-called “national intelligence law” in 2017 granting the communist government access to data held by businesses operating in China. As noted by the Department of Homeland Security, this law “compels all PRC firms and entities to support, assist, and cooperate with the PRC intelligence services, creating a legal obligation for those entities to turn over data collected abroad and domestically to the PRC.”

Keep reading