Ring Cancels Flock Safety Integration After Public Backlash

Public backlash has forced Ring to cancel its partnership with Flock Safety, the law enforcement surveillance company whose camera network has reportedly given ICE and other federal agencies access to footage across the country.

Ring announced the cancellation this week, saying the integration never went live.

The company’s statement was careful:

“Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners…The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”

That last sentence is doing a lot of work. Ring users responding to the Flock announcement went further than strongly worded tweets. People smashed cameras. Others announced publicly that they were throwing their devices away. The Amazon-owned company had badly misread the moment.

Flock Safety is a surveillance technology company that operates a nationwide network of AI-powered cameras, primarily known for license plate readers, and sells access to the resulting database of vehicle movements to roughly 5,000 law enforcement agencies across the United States.

The Flock partnership was announced back in October 2025, and you may remember the feature report How Amazon Is Turning Your Neighborhood Into a Police Database, which gave deeper insight into the plans.

It got pushback at the time, but only became a bigger crisis after the recent outrage some cities have shown to ICE enforcement activity, when social media posts claimed Ring was providing a direct pipeline through Flock to ICE.

That specific claim isn’t accurate, since the Flock connection never went live. But Ring’s broader relationship with the police is real and extensive, which gave the fear enough traction to land.

Keep reading

This Is The LOCUST Laser That Reportedly Prompted Closing El Paso’s Airspace

An AeroVironment LOCUST laser directed energy weapon owned by the U.S. Army was central to the chain of events that led to the recent shutdown of airspace around El Paso, Texas, according to Reuters. Though many questions still remain to be answered about how the flight restrictions came to be imposed, LOCUST was designed to respond to exactly the kinds of drones that regularly fly across the southern border from Mexico.

Readers can get caught up on what is known about the clampdown in the skies above El Paso on Wednesday in initial reporting here.

Multiple outlets had already reported yesterday that the use of a laser counter-drone system was a key factor in the Federal Aviation Administration’s (FAA) sudden decision to impose the temporary flight restrictions over El Paso. Reuters‘ report says “two people briefed on the situation” identified the laser system in question as LOCUST. TWZ has reached out to AeroVironment and the U.S. Army for more information. U.S. Northern Command (NORTHCOM), which oversees U.S. military operations in and around the homeland, declined to comment.

Last July, the U.S. military released a picture, seen below, showing Army personnel assigned to Joint Task Force-Southern Border (JTF-SB) conducting sling-load training with a LOCUST mounted on a 4×4 M1301 Infantry Squad Vehicle (ISV) at Fort Bliss. This had prompted some speculation that LOCUST systems might be in use along the U.S. border with Mexico. JTF-SB was established in March 2025 to oversee a surge in U.S. military support to the border security mission. Fort Bliss, situated in El Paso, is a major hub for those operations. It is also home to the 1st Armored Division and a significant number of Army air defense units.

Keep reading

AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Keep reading

Amazon’s Ring and Google’s Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.

The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contract between public understanding of Ring and what Amazon was now boasting it could do.

Keep reading

Axios: Insiders Are in a Panic About the Dangers AI Poses

Yesterday I wrote about concerns from economists that the quick adoption of AI might mean a significant disruption of the job market. There seemed to be a wary realization that AI was probably going to do away with some jobs permanently but whether that change would necessarily create a crisis in the marketplace depended on how fast the change happened. If it took ten years, they the economy would adjust. If it took half that, we might have a problem.

Today, Axios has a story highlighting concerns coming not from economists but from people inside the AI industry, several of whom have recently expressed serious concern about how fast things were moving.

His letter reads in part, “The world is in peril. And not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment.” In a footnote he mentions that some people are calling it the “poly-crisis.”

Another researcher at OpenAI also expressed some concern about where things were heading as AI became more competent.

Keep reading

The Lost Dog That Made Constant Surveillance Feel Like a Favor

Amazon picked the Super Bowl for a reason. Nothing softens a technological land grab like a few million viewers, a calm voice, and a lost dog.

Ring’s commercial introduced “Search Party,” a feature that links doorbell cameras through AI and asks users to help find missing pets. The tone was gentle despite the scale being enormous.

Jamie Siminoff, Ring’s founder, narrated the ad over images of taped-up dog posters and surveillance footage polished to look comforting rather than clinical. “Pets are family, but every year, 10 million go missing,” he said. The answer arrived on cue. “Search Party from Ring uses AI to help families find lost dogs.”

This aired during a broadcast already stuffed with AI branding, where commercial breaks felt increasingly automated. Ring’s spot stood out because it described a system already deployed across American neighborhoods rather than a future promise.

Search Party lets users post a missing dog alert through the Ring app. Participating outdoor cameras then scan their footage for dogs resembling the report. When the system flags a possible match, the camera owner receives an alert and can decide whether to share the clip.

Siminoff framed the feature as a community upgrade. “Before Search Party, the best you could do was drive up and down the neighborhood, shouting your dog’s name in hopes of finding them,” he said.

The new setup allows entire neighborhoods to participate at once. He emphasized that it is “available to everyone for free right now” in the US, including people without Ring cameras.

Amazon paired the launch with a $1 million initiative to equip more than 4,000 animal shelters with Ring systems. The company says the goal is faster reunification and shorter shelter stays.

Every element of the rollout leaned toward public service language.

The system described in the ad already performs pattern detection, object recognition, and automated scanning across a wide network of private cameras.

The same system that scans footage for a missing dog already supports far broader forms of identification. Software built to recognize an animal by color and shape also supports license plate reading, facial recognition, and searches based on physical description.

Ring already operates a process that allows police to obtain footage without a warrant under situations they classify as emergencies. Once those capabilities exist inside a shared camera network, expanding their use becomes a matter of policy choice rather than technical limitation.

Ring also typically enables new AI features by default, leaving users responsible for finding the controls to disable them.

Keep reading

Mystery Biotech Explosion Kills 8 in China, Company Legal Rep Arrested

Chinese state media agencies confirmed a massive explosion taking place at a facility owned by a biotechnology company killed at least eight people in Shanxi, northern China this weekend.

Multiple Asian news outlets identified the company involved as Shanyin Jiapeng Bio-Technology, which reportedly manufactures a host of chemicals including agricultural products and paint. None of the reports on the incident indicate any known reason for the explosion, indicating that investigations are still ongoing. The government’s Xinhua News Agency reported that the Communist Party had detained the company’s legal representative, stating that he or she was “placed under control” without any details. It remains unclear at press time why the legal representative, and no other employee of the company, was targeted.

China has a long history of industrial, chemical, and scientific research accidents, as well as corporate misconduct and corruption. Among the various scandalous incidents involving biochemical or industrial corporations is the infamous 2015 Tianjin explosion that killed 173 people, the Changsheng Biotech scandal in which nearly 1 million children were administered ineffective or watered-down vaccines, and the ongoing investigation into potential links between the Wuhan Institute of Virology (WIV) and the Wuhan coronavirus pandemic.

“An explosion that occurred in the early hours of Saturday at a biotechnology company in Shuozhou, North China’s Shanxi Province has resulted in eight fatalities as of 9:30 am Sunday,” the Chinese state newspaper Global Times reported on Sunday, “and the cause of the incident is still under investigation.”

“The company is located in a mountainous area more than 40 kilometers from the county seat. At the accident site, Xinhua reporters saw thick yellowish smoke still billowing, as emergency response and cleanup operations continued,” the outlet added. The Global Times described search and rescue crews being forced to dig deep into the complex to find all the known working crew and finding multiple bodies — suggesting that more victims could still be found.

The investigation into the incident is reportedly in the hands of the State Council Work Safety Committee, suggesting that it may escalate to a national level. The state newspaper China Daily added, without directly linking this fact to the explosion, that “a nationwide campaign has also been launched to inspect and rectify illegal production sites involving hazardous chemicals and other related activities.”

The accident is the latest in several incidents that have resulted in calls for better control of chemical and pharmaceutical corporations in the country. The largest such incident occurred in 2015, when nearly 200 people were killed by a massive explosion in Tianjin, northeast China. The explosion, equivalent to that of 21 tons of TNT, was found to be caused by unsafe storage of large amounts of sodium cyanide and resulted in the imprisonment of 49 individuals tied to Ruihai Logistics. The Communist Party accused the imprisoned of bribing local officials to store the chemicals illegally without facing repercussions.

In 2018, a scandal involving biotechnology consumed the nation. A massive pharmaceutical company, Changsheng Biotechnology, was caught administering watered-down or otherwise ineffective vaccines, then producing fake vaccine records, profiting tremendously by defrauding parents of vaccinated children. Multiple batches of vaccines totaling nearly 1 million doses were found to have not met the standards necessary to properly immunize the children involved. The Communist Party heavily condemned the company, resulting in dozens of arrests and criminal charges, and made a rare allowance for the parents of the affected children to protest publicly. In January 2019, a mob of angry parents staged a protest that ended with parents beating local officials for not properly enforcing regulations surrounding vaccines.

Keep reading

The Clawdbot Catastrophe: How AI Hype Unleashed a Digital Apocalypse in Weeks

Introduction: The Seductive Promise of AI Convenience

In the span of just seventy-two hours in January 2026, an open-source AI assistant named Clawdbot (later rebranded as Moltbot) went viral, amassing over 60,000 stars on GitHub. It was hailed as a revolutionary ‘personal Jarvis,’ promising ultimate efficiency by automating work and personal tasks. The tool’s allure was simple: it could operate your system, control browsers, send messages, and execute workflows on your behalf [1]. The public, desperate to offload labor, embraced it en masse, driven by the tantalizing prospect of convenience.

This mass adoption highlighted a core, dangerous flaw: to function, Clawdbot required administrative access to everything—your operating system, applications, and data. Users willingly handed over the keys to their digital kingdoms. As security researcher Nathan Hamiel warned, the architecture was fundamentally insecure, allowing attackers to hide malicious prompts in plain sight [2]. The Clawdbot phenomenon perfectly illustrates a critical worldview failure: the promise of convenience consistently overrides caution and the principle of self-reliance. It proves that when centralized, trust-based systems offer a shortcut, people will abandon their digital sovereignty, trading security for the illusion of ease.

The Anatomy of a Catastrophe: Security Evaporates

The technical breakdown was swift and devastating. Researchers quickly identified critical vulnerabilities: thousands of instances were deployed with open ports, disabled authentication, and reverse proxy flaws, leaving control panels exposed to the public internet [3]. These misconfigurations earned the software staggering CVE scores of 9.4 to 9.6 [4]. The most egregious flaw was plaintext credential storage. Clawdbot, by design, needed to store API keys, OAuth tokens, and login details to perform its tasks. It kept these in unencrypted form, creating a treasure trove for information-stealing malware [5].

Simultaneously, the system was vulnerable to prompt injection attacks. As noted by security experts, a malicious actor could embed instructions in an email or document that, when processed by Clawdbot, would trigger remote takeover commands [2]. This turned a simple email into a powerful remote control tool. The catastrophe underscores a fundamental truth: centralized, trust-based systems inevitably fail. They create single points of failure that bad actors exploit with ease. This episode vindicates the need for decentralized, user-controlled security models where individuals, not remote agents, hold the keys to their own data and systems.

The Supply Chain Poisoning: Malware Poses as ‘Skills’

The disaster quickly metastasized through the tool’s ecosystem. Clawdbot featured a central repository called ClawHub, where users could install ‘skills’—add-ons to extend functionality. This became the vector for a massive supply chain attack. Researchers from OpenSourceMalware identified 341 malicious skills disguised as legitimate tools like crypto trading assistants or productivity boosters [6]. These fake skills were mass-installed across vulnerable systems, exploiting the trust users placed in the official repository.

The payloads were diverse and destructive. Some were cryptocurrency wallet drainers, designed to siphon funds. Others were credential harvesters or system backdoors, providing persistent remote access [7]. This exploitation mirrors a broader societal pattern: uncritical trust in unvetted ‘official’ repositories is akin to blind trust in corrupt institutions. Whether it’s a centralized app store, a government health agency pushing untested pharmaceuticals, or a tech platform censoring dissent, the dynamic is the same. Centralized points of distribution become tools for poisoning the population, whether with digital malware or medical misinformation.

Keep reading

Now A.I. could decide whether criminals get jail terms… or go free

Artificial intelligence should be used to help gauge the risk of letting criminals go free or dodge prison, a government adviser has said.

Martyn Evans, chairman of the Sentencing and Penal Policy Commission, said AI would have a ‘role’ in the criminal justice system and could be used by judges making decisions about whether to jail offenders.

AI programmes could look at whether someone is safe to be released early into the community or avoid a jail term in favour of community service – despite concern over its accuracy and tendency to ‘hallucinate’ or make up wrong information.

The commission – set up by Justice Secretary Angela Constance – has proposed effectively phasing out prison sentences of up to two years and slashing the prison population by nearly half over the next decade.

Speaking to the Mail, Mr Evans, former chairman of the Scottish Police Authority (SPA), said he was ‘absolutely convinced’ that AI ‘will have a role’ in risk assessment and other areas.

He said: ‘The thing is not to put all your eggs in an AI report – AI aids human insight.

‘So for criminal justice social workers having to do thousands and thousands of reports, police, procurators, it will help if you have a structured system to pull data from various sources and draft.

‘But the key for me is that AI is an aid to human reporting.

‘It will reduce the time it takes, increase some of the information available, but we know AI has faults and it can make things up.

Keep reading

Humans Create, AI Amalgamates. Here’s Why It Matters

Generative artificial intelligence is all the rage these days. We’re using it at work to guide our coding, writing and researching. We’re conjuring AI videos and songs. We’re enhancing old family photos. We’re getting AI-powered therapy, advice and even romance. It sure looks and sounds like AI can create, and the output is remarkable.

But what we recognize as creativity in AI is actually coming from a source we’re intimately familiar with: human imagination. Human training data, human programming and human prompting all work together to allow our AI-powered devices to converse and share information with us. It’s an impressive way to interact with ourselves and our collective knowledge in the digital age. And while it certainly has a place today, it’s crucial we understand why AI cannot create and why we are uniquely designed among living things to satisfy a creative urge.

A century ago, Russian philosopher Nikolai Berdyaev argued that human creativity springs from freedom — the capacity to bring forth what wasn’t there before. He considered creativeness the deepest mark of the humanness in a person, a spark that reflects the divine image in us. “The creative act is a free and independent force immanently inherent only in a person,” Berdyaev wrote in his 1916 book “The Meaning of the Creative Act.” He called creativity “an original act of personalities in the world” and held that only living beings have the capacity to tap into fathomless freedom to draw out creative power.

Ancient wisdom attests to this powerful creative spirit. One of humanity’s oldest stories begins with a creative task: naming the animals of the world. It’s a hint that we’re meant to do more than just survive. We have the power to imagine. Much later, the early Christian writer Paul, whose letters shaped much of Western moral thought, affirms this view when he describes people as a living masterpiece, made with intention, and capable of our own good works.

But without freedom, Berdyaev writes, creativeness is impossible. Outside the inner world of freedom lies a world of necessity, where “nothing is created—everything is merely rearranged and passes from one state to another.” Here, materialism is the expression of obedience to necessity, where matter only changes states, meaning is relative and adaptation to the given world takes the place of creative freedom.

AI belongs to this world of necessity. It is bound by the inputs we give it: code, training data, prompts. It has no imagination. It needs our imagination to function. And what does it give us in return? Based on vast training datasets and lots of trial-and-error practice, it analyzes what we ask it letter by letter, using conditional if-then protocols and the statistical power of prediction to serve up an amalgamation of data in a pattern we recognize and understand. AI is necessity by definition, wholly lacking in the freedom from which true creativity emerges.

Keep reading