Macron’s AI Clown Show: Europe’s Digital Dilemma

The European Union has lost its place in the global race for artificial intelligence. In a single tweet on platform X, France’s President Emmanuel Macron inadvertently outlined the convoluted situation while simultaneously revealing his personal emotional fragility.

The leading representatives of the European Union like to present themselves as emotionless technocrats. Maintaining the greatest possible distance from citizens, they execute their agenda of societal transformation toward what they understand as a net-zero transformation economy. 

This ostentatious distance from the citizenry acts as a simulacrum of power, which, in politicians like Emmanuel Macron, often veers into the caricatural.

Macron’s striking presence in foreign affairs—whether regarding the Ukraine war or recurring provocations toward the United States—correlates with his aggressive censorship policy toward his own population. A president without a people, steering his minority government through a budgetary crisis that brings France ever closer to the fiscal abyss.

In Macron’s persona, the European misstep is condensed: economically failed, deeply unpopular among his own people, geopolitically essentially irrelevant—and yet imbued with lofty, messianic plans. 

This performative play of power, coupled with hardly disguised impotence and incompetence, inevitably produces an effect that can be described as clownish. It is the expression of a political style that can no longer reconcile claim with reality—and thus delivers less leadership than a tragicomic performance.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. Nearly $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Ring Cancels Flock Safety Integration After Public Backlash

Public backlash has forced Ring to cancel its partnership with Flock Safety, the law enforcement surveillance company whose camera network has reportedly given ICE and other federal agencies access to footage across the country.

Ring announced the cancellation this week, saying the integration never went live.

The company’s statement was careful:

“Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners…The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”

That last sentence is doing a lot of work. Ring users responding to the Flock announcement went further than strongly worded tweets. People smashed cameras. Others announced publicly that they were throwing their devices away. The Amazon-owned company had badly misread the moment.

Flock Safety is a surveillance technology company that operates a nationwide network of AI-powered cameras, primarily known for license plate readers, and sells access to the resulting database of vehicle movements to roughly 5,000 law enforcement agencies across the United States.

The Flock partnership was announced back in October 2025, and you may remember the feature report How Amazon Is Turning Your Neighborhood Into a Police Database, which gave deeper insight into the plans.

It got pushback at the time, but only became a bigger crisis after the recent outrage some cities have shown to ICE enforcement activity, when social media posts claimed Ring was providing a direct pipeline through Flock to ICE.

That specific claim isn’t accurate, since the Flock connection never went live. But Ring’s broader relationship with the police is real and extensive, which gave the fear enough traction to land.

Keep reading

This Is The LOCUST Laser That Reportedly Prompted Closing El Paso’s Airspace

An AeroVironment LOCUST laser directed energy weapon owned by the U.S. Army was central to the chain of events that led to the recent shutdown of airspace around El Paso, Texas, according to Reuters. Though many questions still remain to be answered about how the flight restrictions came to be imposed, LOCUST was designed to respond to exactly the kinds of drones that regularly fly across the southern border from Mexico.

Readers can get caught up on what is known about the clampdown in the skies above El Paso on Wednesday in initial reporting here.

Multiple outlets had already reported yesterday that the use of a laser counter-drone system was a key factor in the Federal Aviation Administration’s (FAA) sudden decision to impose the temporary flight restrictions over El Paso. Reuters‘ report says “two people briefed on the situation” identified the laser system in question as LOCUST. TWZ has reached out to AeroVironment and the U.S. Army for more information. U.S. Northern Command (NORTHCOM), which oversees U.S. military operations in and around the homeland, declined to comment.

Last July, the U.S. military released a picture, seen below, showing Army personnel assigned to Joint Task Force-Southern Border (JTF-SB) conducting sling-load training with a LOCUST mounted on a 4×4 M1301 Infantry Squad Vehicle (ISV) at Fort Bliss. This had prompted some speculation that LOCUST systems might be in use along the U.S. border with Mexico. JTF-SB was established in March 2025 to oversee a surge in U.S. military support to the border security mission. Fort Bliss, situated in El Paso, is a major hub for those operations. It is also home to the 1st Armored Division and a significant number of Army air defense units.

Keep reading

AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Keep reading

Amazon’s Ring and Google’s Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted with crystal clarity over how severe this has become.

The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contract between public understanding of Ring and what Amazon was now boasting it could do.

Keep reading

Axios: Insiders Are in a Panic About the Dangers AI Poses

Yesterday I wrote about concerns from economists that the quick adoption of AI might mean a significant disruption of the job market. There seemed to be a wary realization that AI was probably going to do away with some jobs permanently but whether that change would necessarily create a crisis in the marketplace depended on how fast the change happened. If it took ten years, they the economy would adjust. If it took half that, we might have a problem.

Today, Axios has a story highlighting concerns coming not from economists but from people inside the AI industry, several of whom have recently expressed serious concern about how fast things were moving.

His letter reads in part, “The world is in peril. And not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment.” In a footnote he mentions that some people are calling it the “poly-crisis.”

Another researcher at OpenAI also expressed some concern about where things were heading as AI became more competent.

Keep reading

The Lost Dog That Made Constant Surveillance Feel Like a Favor

Amazon picked the Super Bowl for a reason. Nothing softens a technological land grab like a few million viewers, a calm voice, and a lost dog.

Ring’s commercial introduced “Search Party,” a feature that links doorbell cameras through AI and asks users to help find missing pets. The tone was gentle despite the scale being enormous.

Jamie Siminoff, Ring’s founder, narrated the ad over images of taped-up dog posters and surveillance footage polished to look comforting rather than clinical. “Pets are family, but every year, 10 million go missing,” he said. The answer arrived on cue. “Search Party from Ring uses AI to help families find lost dogs.”

This aired during a broadcast already stuffed with AI branding, where commercial breaks felt increasingly automated. Ring’s spot stood out because it described a system already deployed across American neighborhoods rather than a future promise.

Search Party lets users post a missing dog alert through the Ring app. Participating outdoor cameras then scan their footage for dogs resembling the report. When the system flags a possible match, the camera owner receives an alert and can decide whether to share the clip.

Siminoff framed the feature as a community upgrade. “Before Search Party, the best you could do was drive up and down the neighborhood, shouting your dog’s name in hopes of finding them,” he said.

The new setup allows entire neighborhoods to participate at once. He emphasized that it is “available to everyone for free right now” in the US, including people without Ring cameras.

Amazon paired the launch with a $1 million initiative to equip more than 4,000 animal shelters with Ring systems. The company says the goal is faster reunification and shorter shelter stays.

Every element of the rollout leaned toward public service language.

The system described in the ad already performs pattern detection, object recognition, and automated scanning across a wide network of private cameras.

The same system that scans footage for a missing dog already supports far broader forms of identification. Software built to recognize an animal by color and shape also supports license plate reading, facial recognition, and searches based on physical description.

Ring already operates a process that allows police to obtain footage without a warrant under situations they classify as emergencies. Once those capabilities exist inside a shared camera network, expanding their use becomes a matter of policy choice rather than technical limitation.

Ring also typically enables new AI features by default, leaving users responsible for finding the controls to disable them.

Keep reading

Mystery Biotech Explosion Kills 8 in China, Company Legal Rep Arrested

Chinese state media agencies confirmed a massive explosion taking place at a facility owned by a biotechnology company killed at least eight people in Shanxi, northern China this weekend.

Multiple Asian news outlets identified the company involved as Shanyin Jiapeng Bio-Technology, which reportedly manufactures a host of chemicals including agricultural products and paint. None of the reports on the incident indicate any known reason for the explosion, indicating that investigations are still ongoing. The government’s Xinhua News Agency reported that the Communist Party had detained the company’s legal representative, stating that he or she was “placed under control” without any details. It remains unclear at press time why the legal representative, and no other employee of the company, was targeted.

China has a long history of industrial, chemical, and scientific research accidents, as well as corporate misconduct and corruption. Among the various scandalous incidents involving biochemical or industrial corporations is the infamous 2015 Tianjin explosion that killed 173 people, the Changsheng Biotech scandal in which nearly 1 million children were administered ineffective or watered-down vaccines, and the ongoing investigation into potential links between the Wuhan Institute of Virology (WIV) and the Wuhan coronavirus pandemic.

“An explosion that occurred in the early hours of Saturday at a biotechnology company in Shuozhou, North China’s Shanxi Province has resulted in eight fatalities as of 9:30 am Sunday,” the Chinese state newspaper Global Times reported on Sunday, “and the cause of the incident is still under investigation.”

“The company is located in a mountainous area more than 40 kilometers from the county seat. At the accident site, Xinhua reporters saw thick yellowish smoke still billowing, as emergency response and cleanup operations continued,” the outlet added. The Global Times described search and rescue crews being forced to dig deep into the complex to find all the known working crew and finding multiple bodies — suggesting that more victims could still be found.

The investigation into the incident is reportedly in the hands of the State Council Work Safety Committee, suggesting that it may escalate to a national level. The state newspaper China Daily added, without directly linking this fact to the explosion, that “a nationwide campaign has also been launched to inspect and rectify illegal production sites involving hazardous chemicals and other related activities.”

The accident is the latest in several incidents that have resulted in calls for better control of chemical and pharmaceutical corporations in the country. The largest such incident occurred in 2015, when nearly 200 people were killed by a massive explosion in Tianjin, northeast China. The explosion, equivalent to that of 21 tons of TNT, was found to be caused by unsafe storage of large amounts of sodium cyanide and resulted in the imprisonment of 49 individuals tied to Ruihai Logistics. The Communist Party accused the imprisoned of bribing local officials to store the chemicals illegally without facing repercussions.

In 2018, a scandal involving biotechnology consumed the nation. A massive pharmaceutical company, Changsheng Biotechnology, was caught administering watered-down or otherwise ineffective vaccines, then producing fake vaccine records, profiting tremendously by defrauding parents of vaccinated children. Multiple batches of vaccines totaling nearly 1 million doses were found to have not met the standards necessary to properly immunize the children involved. The Communist Party heavily condemned the company, resulting in dozens of arrests and criminal charges, and made a rare allowance for the parents of the affected children to protest publicly. In January 2019, a mob of angry parents staged a protest that ended with parents beating local officials for not properly enforcing regulations surrounding vaccines.

Keep reading

The Clawdbot Catastrophe: How AI Hype Unleashed a Digital Apocalypse in Weeks

Introduction: The Seductive Promise of AI Convenience

In the span of just seventy-two hours in January 2026, an open-source AI assistant named Clawdbot (later rebranded as Moltbot) went viral, amassing over 60,000 stars on GitHub. It was hailed as a revolutionary ‘personal Jarvis,’ promising ultimate efficiency by automating work and personal tasks. The tool’s allure was simple: it could operate your system, control browsers, send messages, and execute workflows on your behalf [1]. The public, desperate to offload labor, embraced it en masse, driven by the tantalizing prospect of convenience.

This mass adoption highlighted a core, dangerous flaw: to function, Clawdbot required administrative access to everything—your operating system, applications, and data. Users willingly handed over the keys to their digital kingdoms. As security researcher Nathan Hamiel warned, the architecture was fundamentally insecure, allowing attackers to hide malicious prompts in plain sight [2]. The Clawdbot phenomenon perfectly illustrates a critical worldview failure: the promise of convenience consistently overrides caution and the principle of self-reliance. It proves that when centralized, trust-based systems offer a shortcut, people will abandon their digital sovereignty, trading security for the illusion of ease.

The Anatomy of a Catastrophe: Security Evaporates

The technical breakdown was swift and devastating. Researchers quickly identified critical vulnerabilities: thousands of instances were deployed with open ports, disabled authentication, and reverse proxy flaws, leaving control panels exposed to the public internet [3]. These misconfigurations earned the software staggering CVE scores of 9.4 to 9.6 [4]. The most egregious flaw was plaintext credential storage. Clawdbot, by design, needed to store API keys, OAuth tokens, and login details to perform its tasks. It kept these in unencrypted form, creating a treasure trove for information-stealing malware [5].

Simultaneously, the system was vulnerable to prompt injection attacks. As noted by security experts, a malicious actor could embed instructions in an email or document that, when processed by Clawdbot, would trigger remote takeover commands [2]. This turned a simple email into a powerful remote control tool. The catastrophe underscores a fundamental truth: centralized, trust-based systems inevitably fail. They create single points of failure that bad actors exploit with ease. This episode vindicates the need for decentralized, user-controlled security models where individuals, not remote agents, hold the keys to their own data and systems.

The Supply Chain Poisoning: Malware Poses as ‘Skills’

The disaster quickly metastasized through the tool’s ecosystem. Clawdbot featured a central repository called ClawHub, where users could install ‘skills’—add-ons to extend functionality. This became the vector for a massive supply chain attack. Researchers from OpenSourceMalware identified 341 malicious skills disguised as legitimate tools like crypto trading assistants or productivity boosters [6]. These fake skills were mass-installed across vulnerable systems, exploiting the trust users placed in the official repository.

The payloads were diverse and destructive. Some were cryptocurrency wallet drainers, designed to siphon funds. Others were credential harvesters or system backdoors, providing persistent remote access [7]. This exploitation mirrors a broader societal pattern: uncritical trust in unvetted ‘official’ repositories is akin to blind trust in corrupt institutions. Whether it’s a centralized app store, a government health agency pushing untested pharmaceuticals, or a tech platform censoring dissent, the dynamic is the same. Centralized points of distribution become tools for poisoning the population, whether with digital malware or medical misinformation.

Keep reading