Nations meet at UN for ‘killer robot’ talks as regulation lags

Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology.

Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent.

Since 2014, countries that are party to the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and the regulation of others.

U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking.

Alexander Kmentt, head of arms control at Austria’s foreign ministry, said that must quickly change.

“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” he told Reuters.

Monday’s gathering of the U.N. General Assembly in New York will be the body’s first meeting dedicated to autonomous weapons.

Though the consultations are not legally binding, diplomatic officials want them to ramp up pressure on military powers that are resisting regulation over concerns that the rules could dull the technology’s battlefield advantages.

Campaign groups hope the meeting will push states to agree on a legal instrument. It will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors.

They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.

“This issue needs clarification through a legally binding treaty. The technology is moving so fast,” said Patrick Wilcken, Amnesty International’s Researcher on Military, Security and Policing.

“The idea that you wouldn’t want to rule out the delegation of life or death decisions … to a machine seems extraordinary.”

Keep reading

FDA To Deploy Artificial Intelligence Across Agency

The Food and Drug Administration (FDA) is planning to roll out artificial intelligence across the agency following a successful pilot program.

FDA Commissioner Dr. Marty Makary has directed all FDA centers to immediately start using artificial intelligence (AI) and fully integrate it by the end of June, the FDA said in a May 8 statement.

By June 30, all centers will use what the agency described as a “common, secure generative AI system integrated with FDA’s internal data platforms.”

“I was blown away by the success of our first AI-assisted scientific review pilot,” Makary said in a statement. “We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.”

AI refers to computer systems that can carry out complex tasks typically performed by humans.

“AI can be generally described as a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions,” FDA officials said in 2023.

Makary said on Thursday that after years of discussion about utilizing AI, it is now time to actually use it to save time, with some tasks that once took days now taking mere minutes.

“We at the FDA now have to ask big questions that we’ve never asked before. Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product and that’s just the beginning,” he said earlier in the week on social media platform X.

The rollout across the FDA is being coordinated by Jeremy Walsh, Booz Allen Hamilton’s former chief technologist, who was recently appointed as the agency’s chief AI officer, and Sridhar Mantha, who previously led the Office of Business Informatics at the FDA’s Center for Drug Evaluation and Research.

Keep reading

Pope Leo Warns Over AI As MIT Researcher Finds 90% Probability Of ‘Existential Threat’

In his first formal audience as the newly elected pontiff, Pope Leo XIV identified artificial intelligence (AI) as one of the most critical matters facing humanity.

“In our own day,” Pope Leo declared, “the church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” He linked this statement to the legacy of his namesake Leo XIII’s 1891 encyclical Rerum Novarum, which addressed workers’ rights and the moral dimensions of capitalism.

His remarks continued the direction charted by the late Pope Francis, who warned in his 2024 annual peace message that AI – lacking human values of compassion, mercy, morality and forgiveness – is too perilous to develop unchecked. Francis, who passed away on April 21, had called for an international treaty to regulate AI and insisted that the technology must remain “human-centric,” particularly in applications involving weapon systems or tools of governance.

Keep reading

The Responsible Lie: How AI Sells Conviction Without Truth

The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok, and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry.

These models aren’t searching for truth through facts and logical arguments—they’re predicting text based on patterns in the vast datasets they’re “trained” on. That’s not intelligence—and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy and incompatible with structured logic or causality. The thinking isn’t real; it’s simulated, and it isn’t even sequential. What people mistake for understanding is actually statistical association.
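
To see what that means mechanically, here is a minimal sketch of next-token prediction using a toy bigram model. The corpus and names are invented for illustration; a real LLM replaces the counting with a neural network over billions of parameters, but the training objective is the same: predict the next token from statistical patterns.

```python
from collections import Counter, defaultdict

# Toy "training" corpus -- the only knowledge this model will ever have.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count which word follows which: pure statistical association,
# with no facts, no logic, and no model of the world.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' (it followed 'the' most often: 2 of 4 times)
print(predict_next("sat"))  # -> 'on'  (the only word that ever followed 'sat')
```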

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead—it justifies.
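
As a caricature of that charge (purely illustrative; no production chatbot is implemented this way), picture a system that commits to an answer by weighted draw and only afterwards writes out the “reasoning”:

```python
import random

# Hypothetical learned answer distribution -- invented for illustration.
ANSWER_PROBS = {"yes": 0.7, "no": 0.3}

def answer_then_rationalize(question: str) -> str:
    # Step 1: the "decision" is a probabilistic draw, made before any
    # reasoning text exists.
    answer = random.choices(list(ANSWER_PROBS), weights=ANSWER_PROBS.values())[0]
    # Step 2: a fluent chain of thought is generated afterwards, written
    # to arrive at the answer already chosen in step 1.
    return (
        f"Let's think step by step about {question!r}. "
        f"Weighing the key factors, they all point the same way. "
        f"Therefore, the answer is: {answer}."
    )

print(answer_then_rationalize("is the model reasoning?"))
```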

LLMs are not neutral tools: they are trained on datasets steeped in the biases, fallacies, and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one political direction, the AI’s answers are likely to lean that way as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite.” When further pressed, DeepSeek apologized for another “misstep,” then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy—it’s an exercise in persuasion.

A similar debate with Google’s Gemini—the model that became notorious for being laughably woke—involved similar persuasive argumentation. In the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty.

Keep reading

Man Killed in Road Rage Incident Brought Back to Life and Speaks to His Killer in Court via AI-Powered Video

A Chandler, Arizona courtroom recently witnessed the use of artificial intelligence to resurrect the voice of a deceased man, Christopher Pelkey, during the sentencing of his killer, Gabriel Horcasitas.

Pelkey, shot dead in a 2021 road rage incident, was digitally recreated using AI to deliver a scripted “impact statement” written by his sister, Stacey Wales.

Horcasitas, 50, was sentenced to 10.5 years for manslaughter after shooting Pelkey, 37, when the latter approached his car in a heated confrontation.

“To Gabriel Horcasitas, the man who shot me: it is a shame we encountered each other that day in those circumstances,” the AI-generated video said, featuring a digitally aged photograph of Pelkey.

“In another life, we probably could have been friends. I believe in forgiveness and God, who forgives. I always have, and still do,” the video continued. The statement moved the judge, who openly praised the AI technology, to hand down a sentence a full year longer than prosecutors had requested.

Keep reading

Palantir Partners with NATO on Controversial AI Project Maven

In late March, the North Atlantic Treaty Organization (NATO) Communications and Information Agency (NCIA) and Palantir Technologies Inc. announced a new agreement for the Palantir Maven Smart System NATO (MSS NATO) to be deployed by NATO’s Allied Command Operations (ACO).

Palantir was co-founded by Alex Karp and Peter Thiel, the Technocrat-Zionists who also happen to be Steering Committee members of the Bilderberg Group.

The new deal will see NCIA partner with Supreme Headquarters Allied Powers Europe (SHAPE) and Palantir to deliver Maven Smart System NATO to the Warfighter, a move which NCIA general manager Ludwig Decamps says will provide “customized state-of-the-art AI capabilities to the Alliance” and allow NATO to “operate effectively and decisively.”

The Maven Smart System (MSS) uses AI algorithms and machine learning capabilities to scan and identify enemy systems.

Palantir’s MSS NATO makes use of “cutting-edge artificial intelligence (AI)” for core military operations, including large language models (LLMs), generative AI and machine learning.

Shon Manasco, Senior Counselor at Palantir Technologies, said the arrangement with NCIA and SHAPE will “bolster deterrence by deploying an AI-enabled warfighting platform”.

General Markus Laubenthal, SHAPE Chief of Staff, said Maven will allow NATO to be “more agile, adaptable, and responsive to emerging threats”.

SHAPE is the headquarters and commander of NATO’s ACO based near Mons, Belgium.

Keep reading

Police Are Now Using AI To Create Fake Undercover Profiles Online

American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on “college protesters,” “radicalized” political activists, and suspected drug and human traffickers, according to internal documents, contracts, and communications 404 Media obtained via public records requests.

Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an “AI-powered force multiplier for public safety” that “deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels.” According to a presentation obtained by 404 Media, Massive Blue is offering cops these virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media.

Massive Blue lists “border security,” “school safety,” and stopping “human trafficking” among Overwatch’s use cases. The technology—which as of last summer had not led to any known arrests—demonstrates the types of social media monitoring and undercover tools private companies are pitching to police and border agents. Concerns about tools like Massive Blue have taken on new urgency considering that the Trump administration has revoked the visas of hundreds of students, many of whom have protested against Israel’s war in Gaza.

404 Media obtained a presentation showing some of these AI characters. These include a “radicalized AI” “protest persona,” which poses as a 36-year-old divorced woman who is lonely, has no children, and is interested in baking, activism, and “body positivity.” Another AI persona in the presentation is described as a “‘Honeypot’ AI Persona.” Her backstory says she’s a 25-year-old from Dearborn, Michigan, whose parents emigrated from Yemen, and who speaks the Sanaani dialect of Arabic. The presentation also says she uses various social media apps, that she’s on Telegram and Signal, and that she has US and international SMS capabilities. Other personas are a 14-year-old boy “child trafficking AI persona,” an “AI pimp persona,” “college protestor” [sic], “external recruiter for protests,” “escorts,” and “juveniles.”

Keep reading

Artificial General Intelligence (AGI): Can It Really Think Like A Human?

What is AGI?

When the lines blur between man and machine, you’re looking at artificial general intelligence (AGI). Unlike its counterpart, artificial narrow intelligence (ANI), which applies AI to individual, narrowly defined problems, AGI represents artificial intelligence that can understand, learn and apply knowledge in a way that is indistinguishable from human cognition.

AGI is still theoretical, but the prospect of artificial intelligence being able to holistically replace human input and judgment has naturally attracted plenty of interest, with researchers, technologists and academics alike seeking to bring the concept of AGI to reality. 

Another strand of prevailing research explores the feasibility and implications of AGI versus ANI in a world increasingly shaped by AI capabilities.

Indeed, while ANI has already transformed various industries, AGI’s potential goes far beyond. Imagine a world where machines can not only assist humans in their tasks but also proactively understand the drivers behind specific tasks, predict outcomes, and autonomously create innovative solutions to achieve optimal results. This paradigm shift could revolutionize healthcare, education, transportation and countless other fields.

Keep reading

The Next Phase Of Surveillance? Getting Under Your Skin

AI and transhumanism: Hackable animals

My friends, let me introduce you to Yuval Noah Harari, a man chock-full of big ideas. He explained during the COVID crisis:

“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance. If we want to stop this epidemic, we need not just to monitor people, we need to monitor what’s happening under their skin.”

In a 60 Minutes interview with Anderson Cooper, Harari repeated this idea: “What we have seen so far is corporations and governments collecting data about where we go, who we meet, what movies we watch. The next phase is the surveillance going under our skin.”

He likewise told India Today, when commenting on changes accepted by the population during COVID-19:

“We now see mass surveillance systems established even in democratic countries which previously rejected them, and we also see a change in the nature of surveillance. Previously, surveillance was mainly above the skin; now we want it under the skin.

“Governments want to know not just where we go or who we meet. They want to know what’s happening under our skin: what is our body temperature; what is our blood pressure; what is our medical condition?”

Harari is clearly a man who wants to … get under your skin. He just might succeed.

Another recent interview finds him waxing philosophical:

“Now humans are developing even bigger powers than ever before. We are really acquiring divine powers of creation and destruction. We are really upgrading humans into gods. We are acquiring, for instance, the power to re-engineer human life.”

As Kierkegaard once said of Hegel when he talks about the Absolute, when Harari talks about the future, he sounds like he’s going up in a balloon.

Forgive me, but a few last nuggets from professor Harari will round out the picture of his philosophy, and his lofty hopes and dreams:

“Humans are now hackable animals. You know, the whole idea that humans have this soul or spirit, and they have free will and nobody knows what’s happening inside me, so, whatever I choose, whether in the election or in the supermarket, that’s my free will — that’s over.”

Harari explains that to hack a human being, you need a lot of computing power and a lot of biometric data, something that was not possible until the recent advent of AI.

Keep reading

New York Is Quietly Rolling Out Precrime Surveillance Tech

Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.

Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”

MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”

“AI is the future,” Kemper assured the MTA’s safety committee.

So far, the MTA insists this isn’t about watching you, per se. It’s watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”

And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.

Keep reading