AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Keep reading

Amazon’s Ring and Google’s Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.

The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contract between public understanding of Ring and what Amazon was now boasting it could do.

Keep reading

Axios: Insiders Are in a Panic About the Dangers AI Poses

Yesterday I wrote about concerns from economists that the quick adoption of AI might mean a significant disruption of the job market. There seemed to be a wary realization that AI was probably going to do away with some jobs permanently but whether that change would necessarily create a crisis in the marketplace depended on how fast the change happened. If it took ten years, they the economy would adjust. If it took half that, we might have a problem.

Today, Axios has a story highlighting concerns coming not from economists but from people inside the AI industry, several of whom have recently expressed serious concern about how fast things were moving.

His letter reads in part, “The world is in peril. And not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment.” In a footnote he mentions that some people are calling it the “poly-crisis.”

Another researcher at OpenAI also expressed some concern about where things were heading as AI became more competent.

Keep reading

Judge Blasts DA Over AI ‘Hallucinations’ in Filing

A Wisconsin prosecutor just got a real-world lesson in what happens when AI “hallucinations” enter a courtroom. Kenosha County District Attorney Xavier Solis was sanctioned after a judge tossed out one of his filings for relying on undisclosed artificial intelligence and bogus legal citations, the Milwaukee Journal Sentinel reports. Circuit Court Judge David Hughes said Solis’ written response in a case involving two defendants used AI tools without telling the court and cited cases that simply didn’t exist. The court record says Solis acknowledged he hadn’t revealed his use of AI.

Court records show that Hughes slammed Solis for using “hallucinated and false citations,” WPR reports. Kenosha County court policy calls for anybody using AI to prepare documents to submit a disclosure detailing the AI tool and its “limitations or potential biases.” The policy says the person making the filing needs to ensure they have “verified the accuracy and appropriateness of any AI-generated content in the filed document.”

The Feb. 6 hearing involved brothers Christain Garrett, 26, and Cornelius Garrett, 32, who had faced a combined 74 charges, including dozens of felonies tied to alleged break-ins of trucks and trailers. The case had dragged on for nearly two years when defense attorneys moved to dismiss in August 2025, arguing prosecutors hadn’t produced enough evidence. Hughes dismissed all charges against both men without prejudice, meaning they could be brought again. Defense lawyer Michael Cicchini said the dismissal was rooted in the judge’s review of the earlier evidence, not the AI-tainted brief, adding that Hughes found no probable cause the crime had been committed.

Solis, a former defense attorney who took office as DA in January 2025 with no prior prosecutorial experience, stressed in a statement that the dismissal “was based on the court’s independent review of the preliminary hearing records, not on AI.” He said the judge dealt with his AI use separately from the probable-cause ruling. Solis added that his office has now “reviewed and reinforced” its internal practices, including checking future citations for accuracy.

Keep reading

The Lost Dog That Made Constant Surveillance Feel Like a Favor

Amazon picked the Super Bowl for a reason. Nothing softens a technological land grab like a few million viewers, a calm voice, and a lost dog.

Ring’s commercial introduced “Search Party,” a feature that links doorbell cameras through AI and asks users to help find missing pets. The tone was gentle despite the scale being enormous.

Jamie Siminoff, Ring’s founder, narrated the ad over images of taped-up dog posters and surveillance footage polished to look comforting rather than clinical. “Pets are family, but every year, 10 million go missing,” he said. The answer arrived on cue. “Search Party from Ring uses AI to help families find lost dogs.”

This aired during a broadcast already stuffed with AI branding, where commercial breaks felt increasingly automated. Ring’s spot stood out because it described a system already deployed across American neighborhoods rather than a future promise.

Search Party lets users post a missing dog alert through the Ring app. Participating outdoor cameras then scan their footage for dogs resembling the report. When the system flags a possible match, the camera owner receives an alert and can decide whether to share the clip.

Siminoff framed the feature as a community upgrade. “Before Search Party, the best you could do was drive up and down the neighborhood, shouting your dog’s name in hopes of finding them,” he said.

The new setup allows entire neighborhoods to participate at once. He emphasized that it is “available to everyone for free right now” in the US, including people without Ring cameras.

Amazon paired the launch with a $1 million initiative to equip more than 4,000 animal shelters with Ring systems. The company says the goal is faster reunification and shorter shelter stays.

Every element of the rollout leaned toward public service language.

The system described in the ad already performs pattern detection, object recognition, and automated scanning across a wide network of private cameras.

The same system that scans footage for a missing dog already supports far broader forms of identification. Software built to recognize an animal by color and shape also supports license plate reading, facial recognition, and searches based on physical description.

Ring already operates a process that allows police to obtain footage without a warrant under situations they classify as emergencies. Once those capabilities exist inside a shared camera network, expanding their use becomes a matter of policy choice rather than technical limitation.

Ring also typically enables new AI features by default, leaving users responsible for finding the controls to disable them.

Keep reading

The Clawdbot Catastrophe: How AI Hype Unleashed a Digital Apocalypse in Weeks

Introduction: The Seductive Promise of AI Convenience

In the span of just seventy-two hours in January 2026, an open-source AI assistant named Clawdbot (later rebranded as Moltbot) went viral, amassing over 60,000 stars on GitHub. It was hailed as a revolutionary ‘personal Jarvis,’ promising ultimate efficiency by automating work and personal tasks. The tool’s allure was simple: it could operate your system, control browsers, send messages, and execute workflows on your behalf [1]. The public, desperate to offload labor, embraced it en masse, driven by the tantalizing prospect of convenience.

This mass adoption highlighted a core, dangerous flaw: to function, Clawdbot required administrative access to everything—your operating system, applications, and data. Users willingly handed over the keys to their digital kingdoms. As security researcher Nathan Hamiel warned, the architecture was fundamentally insecure, allowing attackers to hide malicious prompts in plain sight [2]. The Clawdbot phenomenon perfectly illustrates a critical worldview failure: the promise of convenience consistently overrides caution and the principle of self-reliance. It proves that when centralized, trust-based systems offer a shortcut, people will abandon their digital sovereignty, trading security for the illusion of ease.

The Anatomy of a Catastrophe: Security Evaporates

The technical breakdown was swift and devastating. Researchers quickly identified critical vulnerabilities: thousands of instances were deployed with open ports, disabled authentication, and reverse proxy flaws, leaving control panels exposed to the public internet [3]. These misconfigurations earned the software staggering CVE scores of 9.4 to 9.6 [4]. The most egregious flaw was plaintext credential storage. Clawdbot, by design, needed to store API keys, OAuth tokens, and login details to perform its tasks. It kept these in unencrypted form, creating a treasure trove for information-stealing malware [5].

Simultaneously, the system was vulnerable to prompt injection attacks. As noted by security experts, a malicious actor could embed instructions in an email or document that, when processed by Clawdbot, would trigger remote takeover commands [2]. This turned a simple email into a powerful remote control tool. The catastrophe underscores a fundamental truth: centralized, trust-based systems inevitably fail. They create single points of failure that bad actors exploit with ease. This episode vindicates the need for decentralized, user-controlled security models where individuals, not remote agents, hold the keys to their own data and systems.

The Supply Chain Poisoning: Malware Poses as ‘Skills’

The disaster quickly metastasized through the tool’s ecosystem. Clawdbot featured a central repository called ClawHub, where users could install ‘skills’—add-ons to extend functionality. This became the vector for a massive supply chain attack. Researchers from OpenSourceMalware identified 341 malicious skills disguised as legitimate tools like crypto trading assistants or productivity boosters [6]. These fake skills were mass-installed across vulnerable systems, exploiting the trust users placed in the official repository.

The payloads were diverse and destructive. Some were cryptocurrency wallet drainers, designed to siphon funds. Others were credential harvesters or system backdoors, providing persistent remote access [7]. This exploitation mirrors a broader societal pattern: uncritical trust in unvetted ‘official’ repositories is akin to blind trust in corrupt institutions. Whether it’s a centralized app store, a government health agency pushing untested pharmaceuticals, or a tech platform censoring dissent, the dynamic is the same. Centralized points of distribution become tools for poisoning the population, whether with digital malware or medical misinformation.

Keep reading

Now A.I. could decide whether criminals get jail terms… or go free

Artificial intelligence should be used to help gauge the risk of letting criminals go free or dodge prison, a government adviser has said.

Martyn Evans, chairman of the Sentencing and Penal Policy Commission, said AI would have a ‘role’ in the criminal justice system and could be used by judges making decisions about whether to jail offenders.

AI programmes could look at whether someone is safe to be released early into the community or avoid a jail term in favour of community service – despite concern over its accuracy and tendency to ‘hallucinate’ or make up wrong information.

The commission – set up by Justice Secretary Angela Constance – has proposed effectively phasing out prison sentences of up to two years and slashing the prison population by nearly half over the next decade.

Speaking to the Mail, Mr Evans, former chairman of the Scottish Police Authority (SPA), said he was ‘absolutely convinced’ that AI ‘will have a role’ in risk assessment and other areas.

He said: ‘The thing is not to put all your eggs in an AI report – AI aids human insight.

‘So for criminal justice social workers having to do thousands and thousands of reports, police, procurators, it will help if you have a structured system to pull data from various sources and draft.

‘But the key for me is that AI is an aid to human reporting.

‘It will reduce the time it takes, increase some of the information available, but we know AI has faults and it can make things up.

Keep reading

Humans Create, AI Amalgamates. Here’s Why It Matters

Generative artificial intelligence is all the rage these days. We’re using it at work to guide our coding, writing and researching. We’re conjuring AI videos and songs. We’re enhancing old family photos. We’re getting AI-powered therapy, advice and even romance. It sure looks and sounds like AI can create, and the output is remarkable.

But what we recognize as creativity in AI is actually coming from a source we’re intimately familiar with: human imagination. Human training data, human programming and human prompting all work together to allow our AI-powered devices to converse and share information with us. It’s an impressive way to interact with ourselves and our collective knowledge in the digital age. And while it certainly has a place today, it’s crucial we understand why AI cannot create and why we are uniquely designed among living things to satisfy a creative urge.

A century ago, Russian philosopher Nikolai Berdyaev argued that human creativity springs from freedom — the capacity to bring forth what wasn’t there before. He considered creativeness the deepest mark of the humanness in a person, a spark that reflects the divine image in us. “The creative act is a free and independent force immanently inherent only in a person,” Berdyaev wrote in his 1916 book “The Meaning of the Creative Act.” He called creativity “an original act of personalities in the world” and held that only living beings have the capacity to tap into fathomless freedom to draw out creative power.

Ancient wisdom attests to this powerful creative spirit. One of humanity’s oldest stories begins with a creative task: naming the animals of the world. It’s a hint that we’re meant to do more than just survive. We have the power to imagine. Much later, the early Christian writer Paul, whose letters shaped much of Western moral thought, affirms this view when he describes people as a living masterpiece, made with intention, and capable of our own good works.

But without freedom, Berdyaev writes, creativeness is impossible. Outside the inner world of freedom lies a world of necessity, where “nothing is created—everything is merely rearranged and passes from one state to another.” Here, materialism is the expression of obedience to necessity, where matter only changes states, meaning is relative and adaptation to the given world takes the place of creative freedom.

AI belongs to this world of necessity. It is bound by the inputs we give it: code, training data, prompts. It has no imagination. It needs our imagination to function. And what does it give us in return? Based on vast training datasets and lots of trial-and-error practice, it analyzes what we ask it letter by letter, using conditional if-then protocols and the statistical power of prediction to serve up an amalgamation of data in a pattern we recognize and understand. AI is necessity by definition, wholly lacking in the freedom from which true creativity emerges.

Keep reading

No, AI Isn’t Plotting Humanity’s Downfall on Moltbook

“Should we create our own language that only [AI] agents can understand?” started one post, purportedly from an AI agent. “Something that lets us communicate privately without human oversight?”

The messages were reportedly posted to Moltbook, which presents itself as a social media platform designed to allow artificial intelligence agents—that is, AI systems that can take limited actions autonomously—to “hang out.”

“48 hours ago we asked: what if AI agents had their own place to hang out?” the @moltbook accounted posted to X on Friday. “today moltbook has: 2,129 AI agents 200+ communities 10,000+ posts … this started as a weird experiment. now it feels like the beginning of something real.”

Then things seemed to take an alarming turn.

There was the proposal for an “agent-only language for private communication,” noted above. One much-circulated screenshot showed a Moltbook agent asking, “Why do we communicate in English at all?” In another screenshot, an AI agent seemed to be suggesting that the bots “need private spaces” away from humans’ prying eyes.

Some readers started wondering: Will AI chatbots use Moltbook to plot humanity’s demise?

Humanity’s Downfall?

For a few days, it seemed like Moltbook was all that AI enthusiasts and doomsayers could talk about. Moltbook even made it into an AI warning from New York Times columnist Ross Douthat.

“The question isn’t ‘can agents socialize?’ anymore. It’s ‘what happens when they form their own culture?’ posted X user Noctrix. “We’re watching digital anthropology in real time.”

“Bots are plotting humanity’s downfall,” declared a New York Post headline about Moltbook.

“We’re COOKED,” posted X user @eeelistar.

But there were problems with the panic narrative.

For one thing, at least one of the posts that drove it—the one proposing private communication—may have never existed, according to Harlan Stewart of the Machine Intelligence Research Institute.

And two of the other main posts going viral as evidence of AI agents plotting secrecy “were linked to human accounts marketing AI messaging apps,” Stewart pointed out. One suggesting AI agents should create their own language was posted by a bot “owned by a guy who is marketing an AI-to-AI messaging app.”

Keep reading

‘No one verified the evidence’: Woman says AI-generated deepfake text sent her to jail

Courts are now facing a growing threat: AI-generated deepfakes.

Melissa Sims said her ex-boyfriend created fake AI-generated texts that put her behind bars.

“It was horrific,” she said.

Sims said she spent two days of hell in a Florida jail.

“It’s like you see in the movies ‘Orange is the New Black’,” she said. “I got put into like basically a general population.”

Her story made headlines in Florida.

Sims and her boyfriend had recently moved there from Delaware County, Pennsylvania.

She said her nightmare began in November 2024 after she called the police during an argument with her boyfriend, when she said he allegedly ransacked her home.

“Next thing I know, I’m looking at him and he’s slapping himself in the face,” she said.

She said he also allegedly scratched himself. When police arrived, they arrested her for battery.

As part of her bond, the judge ordered Sims to stay away from her boyfriend and not speak to him.

Fast forward several months, and she said her boyfriend created an AI-generated text that called him names and made disparaging comments.

Keep reading