Florida Rejects Controversial Encryption Backdoor Bill

Legislators in the US state of Florida have shot down a bid to introduce a law that would have mandated encryption backdoors.

Had it passed, the bill – known as SB 868: Social Media Use by Minors – would have weakened encryption in this fundamental way on every platform where minors might choose to open an account.

As the fear-mongering campaign against encryption is repeated over and over again, it’s worth restating: there is no known way to undermine encryption for any one category of users without leaving the entire internet open and at the mercy of everyone from government spies to common criminals.

And that affects both people’s communications and transactions.

Not to mention that while such radical proposals are framed as necessary for an ostensibly equally weighty goal – the safety of young people online – in reality, shuttering encryption harms young people along with everyone else.

If anything, it would make everyone online less secure – and, by the nature of the world, young people more so than others.

And so, Florida’s Senate announced that SB 868 is now “indefinitely postponed and withdrawn from consideration.”

The idea behind the proposal was to give law enforcement access to communications on social platforms by forcing companies to build in backdoors whenever law enforcement produced either a warrant – or merely a subpoena.

Keep reading

Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, while actual losses from those scams grew from $90 million to $500 million.

Keep reading

Nations meet at UN for ‘killer robot’ talks as regulation lags

Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology.

Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent.

Since 2014, countries that are party to the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and regulation of others.

U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking.

Alexander Kmentt, head of arms control at Austria’s foreign ministry, said that must quickly change.

“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” he told Reuters.

Monday’s gathering of the U.N. General Assembly in New York will be the body’s first meeting dedicated to autonomous weapons.

Though the consultations are not legally binding, diplomatic officials want them to ramp up pressure on military powers that are resisting regulation over concerns the rules could dull the technology’s battlefield advantages.

Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument.

They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.

“This issue needs clarification through a legally binding treaty. The technology is moving so fast,” said Patrick Wilcken, Amnesty International’s Researcher on Military, Security and Policing.

“The idea that you wouldn’t want to rule out the delegation of life or death decisions … to a machine seems extraordinary.”

Keep reading

FDA To Deploy Artificial Intelligence Across Agency

The Food and Drug Administration (FDA) is planning to roll out artificial intelligence across the agency following a successful pilot program.

FDA Commissioner Dr. Marty Makary has directed all FDA centers to immediately start using artificial intelligence (AI) and fully integrate it by the end of June, the FDA said in a May 8 statement.

By June 30, all centers will use what the agency described as a “common, secure generative AI system integrated with FDA’s internal data platforms.”

“I was blown away by the success of our first AI-assisted scientific review pilot,” Makary said in a statement. “We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.”

AI refers to computer systems that can carry out complex tasks typically performed by humans.

“AI can be generally described as a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions,” FDA officials said in 2023.

Makary said on Thursday that after years of discussion about utilizing AI, it is time to start using it to save time, with some tasks that once took days now taking mere minutes.

“We at the FDA now have to ask big questions that we’ve never asked before. Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product and that’s just the beginning,” he said earlier in the week on social media platform X.

The rollout across the FDA is being coordinated by Jeremy Walsh, Booz Allen Hamilton’s former chief technologist, who was recently appointed as the agency’s chief AI officer, and Sridhar Mantha, who previously led the Office of Business Informatics at the FDA’s Center for Drug Evaluation and Research.

Keep reading

Pope Leo Warns Over AI As MIT Researcher Finds 90% Probability Of ‘Existential Threat’

In his first formal audience as the newly elected pontiff, Pope Leo XIV identified artificial intelligence (AI) as one of the most critical matters facing humanity.

“In our own day,” Pope Leo declared, “the church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” He linked this statement to the legacy of his namesake Leo XIII’s 1891 encyclical Rerum Novarum, which addressed workers’ rights and the moral dimensions of capitalism.

His remarks continued the direction charted by the late Pope Francis, who warned in his 2024 annual peace message that AI – lacking human values of compassion, mercy, morality and forgiveness – is too perilous to develop unchecked. Francis, who passed away on April 21, had called for an international treaty to regulate AI and insisted that the technology must remain “human-centric,” particularly in applications involving weapon systems or tools of governance.

Keep reading

The Responsible Lie: How AI Sells Conviction Without Truth

The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok, and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be “reasoning” is nothing more than a sophisticated form of mimicry.

These models aren’t searching for truth through facts and logical arguments—they’re predicting text based on patterns in the vast datasets they’re “trained” on. That’s not intelligence—and it isn’t reasoning. And if their “training” data is itself biased, then we’ve got real problems.

I’m sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy—and incompatible with structured logic or causality. The thinking isn’t real; it’s simulated, and not even sequential. What people mistake for understanding is actually statistical association.
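The statistical association described above can be sketched with a toy bigram model – a deliberately minimal illustration of next-token prediction, not how production LLMs actually work (they use neural networks over vastly larger contexts). The corpus and word choices here are invented for the example:

```python
from collections import defaultdict, Counter

# Toy bigram "language model": count which word follows which in a
# tiny corpus, then emit the most frequent continuation. The output
# is pure co-occurrence statistics, not reasoning about meaning.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("sat"))  # "on" — chosen purely from co-occurrence counts
```

The model will confidently continue any prompt it has statistics for, yet it has no notion of whether the continuation is true; scaled up by many orders of magnitude, that is the gap the article is pointing at.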

Much-hyped new features like “chain-of-thought” explanations are tricks designed to impress the user. What users are actually seeing is best described as a kind of rationalization generated after the model has already arrived at its answer via probabilistic prediction. The illusion, however, is powerful enough to make users believe the machine is engaging in genuine deliberation. And this illusion does more than just mislead—it justifies.

LLMs are not neutral tools, they are trained on datasets steeped in the biases, fallacies, and dominant ideologies of our time. Their outputs reflect prevailing or popular sentiments, not the best attempt at truth-finding. If popular sentiment on a given subject leans in one direction, politically, then the AI’s answers are likely to do so as well. And when “reasoning” is just an after-the-fact justification of whatever the model has already decided, it becomes a powerful propaganda device.

There is no shortage of evidence for this.

A recent conversation I initiated with DeepSeek about systemic racism, later uploaded back to the chatbot for self-critique, revealed the model committing (and recognizing!) a barrage of logical fallacies, which were seeded with totally made-up studies and numbers. When challenged, the AI euphemistically termed one of its lies a “hypothetical composite.” When further pressed, DeepSeek apologized for another “misstep,” then adjusted its tactics to match the competence of the opposing argument. This is not a pursuit of accuracy—it’s an exercise in persuasion.

A similar debate with Google’s Gemini—the model that became notorious for being laughably woke—involved similar persuasive argumentation. At the end, the model euphemistically acknowledged its argument’s weakness and tacitly confessed its dishonesty.

Keep reading

Man Killed in Road Rage Incident Brought Back to Life and Speaks to His Killer in Court via AI-Powered Video

A Chandler, Arizona courtroom recently witnessed the use of artificial intelligence to resurrect the voice of a deceased man, Christopher Pelkey, during the sentencing of his killer, Gabriel Horcasitas.

Pelkey, shot dead in a 2021 road rage incident, was digitally recreated using AI to deliver a scripted “impact statement” written by his sister, Stacey Wales.

Horcasitas, 50, was sentenced to 10.5 years for manslaughter after shooting Pelkey, 37, when the latter approached his car in a heated confrontation.

“To Gabriel Horcasitas, the man who shot me: it is a shame we encountered each other that day in those circumstances,” said the AI-generated video, which featured a digitally aged photograph of Pelkey.

“In another life, we probably could have been friends. I believe in forgiveness and God, who forgives. I always have, and still do,” the video continued. The statement moved the judge, who openly praised the AI technology, to hand down a sentence a full year longer than prosecutors had requested.

Keep reading

Palantir Partners with NATO on Controversial AI Project Maven

In late March, the North Atlantic Treaty Organization (NATO) Communications and Information Agency (NCIA) and Palantir Technologies Inc. announced a new agreement for the Palantir Maven Smart System NATO (MSS NATO) to be deployed by NATO’s Allied Command Operations (ACO).

Palantir was co-founded by Alex Karp and Peter Thiel, the Technocrat-Zionists who also happen to be Steering Committee members of the Bilderberg Group.

The new deal will see NCIA partner with Supreme Headquarters Allied Powers Europe (SHAPE) and Palantir to deliver the Maven Smart System NATO to the warfighter – a move NCIA general manager Ludwig Decamps says will provide “customized state-of-the-art AI capabilities to the Alliance” and allow NATO to “operate effectively and decisively.”

The Maven Smart System (MSS) uses AI algorithms and machine-learning capabilities to scan and identify enemy systems.

Palantir’s MSS NATO makes use of “cutting-edge artificial intelligence (AI)” for core military operations, including large language models (LLMs) and generative machine learning.

Shon Manasco, Senior Counselor at Palantir Technologies, said the arrangement with NCIA and SHAPE will “bolster deterrence by deploying an AI-enabled warfighting platform”.

General Markus Laubenthal, SHAPE Chief of Staff, said Maven will allow NATO to be “more agile, adaptable, and responsive to emerging threats”.

SHAPE is the headquarters and commander of NATO’s ACO based near Mons, Belgium.

Keep reading

DARPA-Funded Scientists Patent Sound-Triggered Drug Implant System

What’s New: Scientists backed by the Defense Advanced Research Projects Agency (DARPA) have developed—and patented—a sound-triggered drug delivery system that can be remotely activated inside the human body. While pitched as a medical breakthrough, the technology has clear weaponization potential and fits squarely into the dual-use domain of military and civilian biotech. It’s protected in over 150 countries—and explicitly tied to the U.S. military’s research arm.

Why It Matters: DARPA isn’t just funding experimental medicine. It holds rights to a globally patented implant that responds to external soundwaves. Under the Bayh-Dole Act, the government can license, use, or compel use of inventions it funds—even in commercial settings. This raises serious biosecurity questions as wireless, body-embedded systems move from lab to market.

  • The tech was described in a DARPA-funded March 2022 paper authored by researchers from Columbia University, MIT, and George Mason University.
  • Patent filings confirm Columbia submitted the invention to the U.S. patent system before publishing the study. A U.S. provisional patent was filed on September 29, 2021, followed by an international patent application (PCT/US2022/077135) on September 28, 2022.
  • The patent (WO2023/107765) is active as of March 2025.
  • Named inventors on the patent are the same as the lead authors on the paper: Samuel Sia, Rachel D. Field, and Margaret A. Jakus.
  • The patent states: “This invention was made with government support under D20AC00004 awarded by the Defense Advanced Research Projects Agency (DARPA). The government has certain rights in the invention.”

Keep reading

Artificial General Intelligence (AGI): Can It Really Think Like A Human?

What is AGI?

When the lines blur between man and machine, you’re looking at artificial general intelligence (AGI). Unlike its counterpart, artificial narrow intelligence (ANI), which applies AI to individual, narrowly defined problems, AGI represents artificial intelligence that can understand, learn and apply knowledge in a way that is indistinguishable from human cognition.

AGI is still theoretical, but the prospect of artificial intelligence being able to holistically replace human input and judgment has naturally attracted plenty of interest, with researchers, technologists and academics alike seeking to bring the concept of AGI to reality. 

Another strand of research explores the feasibility and implications of AGI versus ANI in a world increasingly shaped by AI capabilities.

Indeed, while ANI has already transformed various industries, AGI’s potential goes far beyond. Imagine a world where machines can not only assist humans in their tasks but also proactively understand the drivers behind specific tasks, predict outcomes, and autonomously create innovative solutions to achieve optimal results. This paradigm shift could revolutionize healthcare, education, transportation and countless other fields.

Keep reading