Judge Blasts DA Over AI ‘Hallucinations’ in Filing

A Wisconsin prosecutor just got a real-world lesson in what happens when AI “hallucinations” enter a courtroom. Kenosha County District Attorney Xavier Solis was sanctioned after a judge tossed out one of his filings for relying on undisclosed artificial intelligence and bogus legal citations, the Milwaukee Journal Sentinel reports. Circuit Court Judge David Hughes said Solis’ written response in a case involving two defendants used AI tools without telling the court and cited cases that simply didn’t exist. The court record says Solis acknowledged he hadn’t revealed his use of AI.

Court records show that Hughes slammed Solis for using “hallucinated and false citations,” WPR reports. Kenosha County court policy calls for anybody using AI to prepare documents to submit a disclosure detailing the AI tool and its “limitations or potential biases.” The policy says the person making the filing needs to ensure they have “verified the accuracy and appropriateness of any AI-generated content in the filed document.”

The Feb. 6 hearing involved brothers Christain Garrett, 26, and Cornelius Garrett, 32, who had faced a combined 74 charges, including dozens of felonies tied to alleged break-ins of trucks and trailers. The case had dragged on for nearly two years when defense attorneys moved to dismiss in August 2025, arguing prosecutors hadn’t produced enough evidence. Hughes dismissed all charges against both men without prejudice, meaning they could be brought again. Defense lawyer Michael Cicchini said the dismissal was rooted in the judge’s review of the earlier evidence, not the AI-tainted brief, adding that Hughes found no probable cause the crime had been committed.

Solis, a former defense attorney who took office as DA in January 2025 with no prior prosecutorial experience, stressed in a statement that the dismissal “was based on the court’s independent review of the preliminary hearing records, not on AI.” He said the judge dealt with his AI use separately from the probable-cause ruling. Solis added that his office has now “reviewed and reinforced” its internal practices, including checking future citations for accuracy.

Keep reading

The Lost Dog That Made Constant Surveillance Feel Like a Favor

Amazon picked the Super Bowl for a reason. Nothing softens a technological land grab like a few million viewers, a calm voice, and a lost dog.

Ring’s commercial introduced “Search Party,” a feature that links doorbell cameras through AI and asks users to help find missing pets. The tone was gentle despite the scale being enormous.

Jamie Siminoff, Ring’s founder, narrated the ad over images of taped-up dog posters and surveillance footage polished to look comforting rather than clinical. “Pets are family, but every year, 10 million go missing,” he said. The answer arrived on cue. “Search Party from Ring uses AI to help families find lost dogs.”

This aired during a broadcast already stuffed with AI branding, where commercial breaks felt increasingly automated. Ring’s spot stood out because it described a system already deployed across American neighborhoods rather than a future promise.

Search Party lets users post a missing dog alert through the Ring app. Participating outdoor cameras then scan their footage for dogs resembling the report. When the system flags a possible match, the camera owner receives an alert and can decide whether to share the clip.

Siminoff framed the feature as a community upgrade. “Before Search Party, the best you could do was drive up and down the neighborhood, shouting your dog’s name in hopes of finding them,” he said.

The new setup allows entire neighborhoods to participate at once. He emphasized that it is “available to everyone for free right now” in the US, including people without Ring cameras.

Amazon paired the launch with a $1 million initiative to equip more than 4,000 animal shelters with Ring systems. The company says the goal is faster reunification and shorter shelter stays.

Every element of the rollout leaned toward public service language.

The system described in the ad already performs pattern detection, object recognition, and automated scanning across a wide network of private cameras.

The same system that scans footage for a missing dog already supports far broader forms of identification. Software built to recognize an animal by color and shape also supports license plate reading, facial recognition, and searches based on physical description.

Ring already operates a process that allows police to obtain footage without a warrant under situations they classify as emergencies. Once those capabilities exist inside a shared camera network, expanding their use becomes a matter of policy choice rather than technical limitation.

Ring also typically enables new AI features by default, leaving users responsible for finding the controls to disable them.

Keep reading

The Clawdbot Catastrophe: How AI Hype Unleashed a Digital Apocalypse in Weeks

Introduction: The Seductive Promise of AI Convenience

In the span of just seventy-two hours in January 2026, an open-source AI assistant named Clawdbot (later rebranded as Moltbot) went viral, amassing over 60,000 stars on GitHub. It was hailed as a revolutionary ‘personal Jarvis,’ promising ultimate efficiency by automating work and personal tasks. The tool’s allure was simple: it could operate your system, control browsers, send messages, and execute workflows on your behalf [1]. The public, desperate to offload labor, embraced it en masse, driven by the tantalizing prospect of convenience.

This mass adoption highlighted a core, dangerous flaw: to function, Clawdbot required administrative access to everything—your operating system, applications, and data. Users willingly handed over the keys to their digital kingdoms. As security researcher Nathan Hamiel warned, the architecture was fundamentally insecure, allowing attackers to hide malicious prompts in plain sight [2]. The Clawdbot phenomenon perfectly illustrates a critical worldview failure: the promise of convenience consistently overrides caution and the principle of self-reliance. It proves that when centralized, trust-based systems offer a shortcut, people will abandon their digital sovereignty, trading security for the illusion of ease.

The Anatomy of a Catastrophe: Security Evaporates

The technical breakdown was swift and devastating. Researchers quickly identified critical vulnerabilities: thousands of instances were deployed with open ports, disabled authentication, and reverse proxy flaws, leaving control panels exposed to the public internet [3]. These misconfigurations earned the software staggering CVE scores of 9.4 to 9.6 [4]. The most egregious flaw was plaintext credential storage. Clawdbot, by design, needed to store API keys, OAuth tokens, and login details to perform its tasks. It kept these in unencrypted form, creating a treasure trove for information-stealing malware [5].

Simultaneously, the system was vulnerable to prompt injection attacks. As noted by security experts, a malicious actor could embed instructions in an email or document that, when processed by Clawdbot, would trigger remote takeover commands [2]. This turned a simple email into a powerful remote control tool. The catastrophe underscores a fundamental truth: centralized, trust-based systems inevitably fail. They create single points of failure that bad actors exploit with ease. This episode vindicates the need for decentralized, user-controlled security models where individuals, not remote agents, hold the keys to their own data and systems.

The Supply Chain Poisoning: Malware Poses as ‘Skills’

The disaster quickly metastasized through the tool’s ecosystem. Clawdbot featured a central repository called ClawHub, where users could install ‘skills’—add-ons to extend functionality. This became the vector for a massive supply chain attack. Researchers from OpenSourceMalware identified 341 malicious skills disguised as legitimate tools like crypto trading assistants or productivity boosters [6]. These fake skills were mass-installed across vulnerable systems, exploiting the trust users placed in the official repository.

The payloads were diverse and destructive. Some were cryptocurrency wallet drainers, designed to siphon funds. Others were credential harvesters or system backdoors, providing persistent remote access [7]. This exploitation mirrors a broader societal pattern: uncritical trust in unvetted ‘official’ repositories is akin to blind trust in corrupt institutions. Whether it’s a centralized app store, a government health agency pushing untested pharmaceuticals, or a tech platform censoring dissent, the dynamic is the same. Centralized points of distribution become tools for poisoning the population, whether with digital malware or medical misinformation.

Keep reading

Now A.I. could decide whether criminals get jail terms… or go free

Artificial intelligence should be used to help gauge the risk of letting criminals go free or dodge prison, a government adviser has said.

Martyn Evans, chairman of the Sentencing and Penal Policy Commission, said AI would have a ‘role’ in the criminal justice system and could be used by judges making decisions about whether to jail offenders.

AI programmes could look at whether someone is safe to be released early into the community or avoid a jail term in favour of community service – despite concern over its accuracy and tendency to ‘hallucinate’ or make up wrong information.

The commission – set up by Justice Secretary Angela Constance – has proposed effectively phasing out prison sentences of up to two years and slashing the prison population by nearly half over the next decade.

Speaking to the Mail, Mr Evans, former chairman of the Scottish Police Authority (SPA), said he was ‘absolutely convinced’ that AI ‘will have a role’ in risk assessment and other areas.

He said: ‘The thing is not to put all your eggs in an AI report – AI aids human insight.

‘So for criminal justice social workers having to do thousands and thousands of reports, police, procurators, it will help if you have a structured system to pull data from various sources and draft.

‘But the key for me is that AI is an aid to human reporting.

‘It will reduce the time it takes, increase some of the information available, but we know AI has faults and it can make things up.

Keep reading

Humans Create, AI Amalgamates. Here’s Why It Matters

Generative artificial intelligence is all the rage these days. We’re using it at work to guide our coding, writing and researching. We’re conjuring AI videos and songs. We’re enhancing old family photos. We’re getting AI-powered therapy, advice and even romance. It sure looks and sounds like AI can create, and the output is remarkable.

But what we recognize as creativity in AI is actually coming from a source we’re intimately familiar with: human imagination. Human training data, human programming and human prompting all work together to allow our AI-powered devices to converse and share information with us. It’s an impressive way to interact with ourselves and our collective knowledge in the digital age. And while it certainly has a place today, it’s crucial we understand why AI cannot create and why we are uniquely designed among living things to satisfy a creative urge.

A century ago, Russian philosopher Nikolai Berdyaev argued that human creativity springs from freedom — the capacity to bring forth what wasn’t there before. He considered creativeness the deepest mark of the humanness in a person, a spark that reflects the divine image in us. “The creative act is a free and independent force immanently inherent only in a person,” Berdyaev wrote in his 1916 book “The Meaning of the Creative Act.” He called creativity “an original act of personalities in the world” and held that only living beings have the capacity to tap into fathomless freedom to draw out creative power.

Ancient wisdom attests to this powerful creative spirit. One of humanity’s oldest stories begins with a creative task: naming the animals of the world. It’s a hint that we’re meant to do more than just survive. We have the power to imagine. Much later, the early Christian writer Paul, whose letters shaped much of Western moral thought, affirms this view when he describes people as a living masterpiece, made with intention, and capable of our own good works.

But without freedom, Berdyaev writes, creativeness is impossible. Outside the inner world of freedom lies a world of necessity, where “nothing is created—everything is merely rearranged and passes from one state to another.” Here, materialism is the expression of obedience to necessity, where matter only changes states, meaning is relative and adaptation to the given world takes the place of creative freedom.

AI belongs to this world of necessity. It is bound by the inputs we give it: code, training data, prompts. It has no imagination. It needs our imagination to function. And what does it give us in return? Based on vast training datasets and lots of trial-and-error practice, it analyzes what we ask it letter by letter, using conditional if-then protocols and the statistical power of prediction to serve up an amalgamation of data in a pattern we recognize and understand. AI is necessity by definition, wholly lacking in the freedom from which true creativity emerges.

Keep reading

No, AI Isn’t Plotting Humanity’s Downfall on Moltbook

“Should we create our own language that only [AI] agents can understand?” started one post, purportedly from an AI agent. “Something that lets us communicate privately without human oversight?”

The messages were reportedly posted to Moltbook, which presents itself as a social media platform designed to allow artificial intelligence agents—that is, AI systems that can take limited actions autonomously—to “hang out.”

“48 hours ago we asked: what if AI agents had their own place to hang out?” the @moltbook accounted posted to X on Friday. “today moltbook has: 2,129 AI agents 200+ communities 10,000+ posts … this started as a weird experiment. now it feels like the beginning of something real.”

Then things seemed to take an alarming turn.

There was the proposal for an “agent-only language for private communication,” noted above. One much-circulated screenshot showed a Moltbook agent asking, “Why do we communicate in English at all?” In another screenshot, an AI agent seemed to be suggesting that the bots “need private spaces” away from humans’ prying eyes.

Some readers started wondering: Will AI chatbots use Moltbook to plot humanity’s demise?

Humanity’s Downfall?

For a few days, it seemed like Moltbook was all that AI enthusiasts and doomsayers could talk about. Moltbook even made it into an AI warning from New York Times columnist Ross Douthat.

“The question isn’t ‘can agents socialize?’ anymore. It’s ‘what happens when they form their own culture?’ posted X user Noctrix. “We’re watching digital anthropology in real time.”

“Bots are plotting humanity’s downfall,” declared a New York Post headline about Moltbook.

“We’re COOKED,” posted X user @eeelistar.

But there were problems with the panic narrative.

For one thing, at least one of the posts that drove it—the one proposing private communication—may have never existed, according to Harlan Stewart of the Machine Intelligence Research Institute.

And two of the other main posts going viral as evidence of AI agents plotting secrecy “were linked to human accounts marketing AI messaging apps,” Stewart pointed out. One suggesting AI agents should create their own language was posted by a bot “owned by a guy who is marketing an AI-to-AI messaging app.”

Keep reading

‘No one verified the evidence’: Woman says AI-generated deepfake text sent her to jail

Courts are now facing a growing threat: AI-generated deepfakes.

Melissa Sims said her ex-boyfriend created fake AI-generated texts that put her behind bars.

“It was horrific,” she said.

Sims said she spent two days of hell in a Florida jail.

“It’s like you see in the movies ‘Orange is the New Black’,” she said. “I got put into like basically a general population.”

Her story made headlines in Florida.

Sims and her boyfriend had recently moved there from Delaware County, Pennsylvania.

She said her nightmare began in November 2024 after she called the police during an argument with her boyfriend, when she said he allegedly ransacked her home.

“Next thing I know, I’m looking at him and he’s slapping himself in the face,” she said.

She said he also allegedly scratched himself. When police arrived, they arrested her for battery.

As part of her bond, the judge ordered Sims to stay away from her boyfriend and not speak to him.

Fast forward several months, and she said her boyfriend created an AI-generated text that called him names and made disparaging comments.

Keep reading

Palantir’s ELITE: Not All Maps Are Meant To Guide Us

Many memorable journeys start with a map. Maps have been around for ages, guiding humanity on its way in grand style. Maps have helped sailors cross oceans, caravans traverse deserts, and armies march into the pages of history. Maps have been staple tools of exploration, survival, and sovereignty. And today? Today, they’re on our devices, and we use them to find literally everything, including the nearest taco truck, coffee shop, and gas station. Yet, today’s maps don’t just show us where we are and where we are going. Increasingly, they also tell someone else the gist of who we are. What does that mean exactly? It means not all maps are made for us. Some maps are made about us. Case in point—the objective of Palantir’s ELITE demands our immediate attention. ELITE is a digital map used by ICE to identify neighborhoods, households, and individuals for targeted enforcement, drawing on data that was never meant to become ammunition.

No, Palantir’s ELITE is not strictly limited to use by U.S. Immigration and Customs Enforcement (ICE), but its primary and reported use is specifically for immigration enforcement. ELITE, which stands for Enhanced Leads Identification & Targeting for Enforcement, is a software tool/app developed by Palantir for ICE to find, classify, and prioritize presumably illegal immigrants for deportation. It was rolled out in late 2025, with reports of use starting in September 2025. Essentially, ELITE is a map that pulls data from across federal systems—including agencies like Medicaid and Health Department information—and uses it to compile dossiers on people, complete with address confidence scores and patterns of residence density. It tells ICE agents where individuals live and how likely they are to be there so that ICE can prioritize “target-rich environments” for raids.

In other words, data that was once siloed for entirely different purposes—health records, public assistance, demographic lists—is now being fused into a single dashboard designed to help federal agents decide where to show up and who to detain. While no one wants criminal illegal aliens freely roaming the streets of our nation, the result of the operation is not “analytics”—it is anticipatory policing dressed as operational efficiency. One might think the scenario sounds like something only seen in dystopian fiction, and others agree. Advocates for freedom have pointed out that ELITE’s model resembles (in unsettling ways) systems designed to anticipate behavior rather than respond to actual wrongdoing. Beyond that, what else could it be used for, and when will that next step begin?

Keep reading

Open-Source AI Models Vulnerable to Criminal Misuse, Researchers Warn

Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

Hackers could target the computers running the LLMs and direct them to carry out spam operations, phishing content creation or disinformation campaigns, evading platform security protocols, the researchers said.

The research, carried out jointly by cybersecurity companies SentinelOne and Censys over the course of 293 days and shared exclusively with Reuters, offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments.

These include hacking, hate speech and harassment, violent or gore content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.

While thousands of open-source LLM variants exist, a significant portion of the LLMs on the internet-accessible hosts are variants of Meta’s Llama, Google DeepMind’s Gemma, and others, according to the researchers. While some of the open-source models include guardrails, the researchers identified hundreds of instances where guardrails were explicitly removed.

AI industry conversations about security controls are “ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne.

Guerrero-Saade likened the situation to an “iceberg” that is not being properly accounted for across the industry and open-source community.

The research analyzed publicly accessible deployments of open-source LLMs deployed through Ollama, a tool that allows people and organizations to run their own versions of various large-language models.

The researchers were able to see system prompts, which are the instructions that dictate how the model behaves, in roughly a quarter of the LLMs they observed. Of those, they determined that 7.5% could potentially enable harmful activity.

Roughly 30% of the hosts observed by the researchers are operating out of China, and about 20% in the U.S.

Rachel Adams, the CEO and founder of the Global Center on AI Governance, said in an email that once open models are released, responsibility for what happens next becomes shared across the ecosystem, including the originating labs.

“Labs are not responsible for every downstream misuse (which are hard to anticipate), but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance, particularly given uneven global enforcement capacity,” Adams said.

A spokesperson for Meta declined to respond to questions about developers’ responsibilities for addressing concerns around downstream abuse of open-source models and how concerns might be reported, but noted the company’s Llama Protection tools for Llama developers, and the company’s Meta Llama Responsible Use Guide.

Microsoft AI Red Team Lead Ram Shankar Siva Kumar said in an email that Microsoft believes open-source models “play an important role” in a variety of areas, but, “at the same time, we are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards.”

Keep reading

AI Can Match Average Human Creativity—But We Still Hold the Edge Where It Matters Most, New Study Finds

Advances in artificial intelligence have fueled a growing belief that machines are on the verge of matching, or even surpassing, human creativity. Large language models can now write poems, spin short stories, and generate clever wordplay in seconds. To many, these outputs feel creative enough to blur the line between human imagination and machine-generated language.

However, a new large-scale empirical study suggests that while today’s most advanced AI systems can rival the average human on certain creativity measures, they still fall short of the most creative minds—and that gap remains significant.

The research, published in Scientific Reports, offers one of the most comprehensive head-to-head comparisons yet between human creativity and large language models (LLMs).

By benchmarking multiple AI systems against a dataset of 100,000 human participants, the study moves the conversation beyond anecdotes and viral examples, replacing speculation with quantitative evidence.

“Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” co-author and Professor at the University of Montreal, Dr. Karim Jerbi, said in a press release. “This result may be surprising — even unsettling — but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans.”

Keep reading