Fact Check: Did Cole Allen Drive for Savannah Guthrie or Other Celebrities?

Accused would-be Trump assassin Cole Tomas Allen was once a driver for several high-profile celebrities, including Savannah Guthrie, Taylor Swift, Bad Bunny, and Sydney Sweeney, according to several posts circulating on social media.

Allen has drawn intense public interest after he charged through security carrying a long gun with the intention of killing President Donald Trump and members of his Cabinet at the White House Correspondents’ Dinner on Saturday night.

The 31-year-old California resident was charged with attempted assassination of the President of the United States, along with several other counts, following the incident, in which a Secret Service officer wearing a ballistic vest was shot in the chest.

The Claim 

Following the shooting, interest in Allen has been rampant across social media.

Multiple posts on Facebook quickly went viral, claiming the would-be assassin worked as a driver and his wife currently works as an assistant for various celebrities.

The celebrities mentioned in these posts include:  

  • Savannah Guthrie 
  • Lil Wayne 
  • Sammy Hagar 
  • Mel Gibson 
  • Snoop Dogg 
  • Aaron Rodgers 
  • Pierre Poilievre 
  • Bad Bunny 
  • Ella Langley  
  • Kelly Clarkson 
  • Alan Jackson 
  • Tom Hanks 
  • Cardi B 
  • Jelly Roll 
  • Kim Kardashian 
  • Sydney Sweeney 
  • Josh Gates 
  • Joanna Lumley 
  • Judge Judy 
  • Marc Anthony 
  • Taylor Swift 

A number of the posts contain language like this: “BREAKING: The shooter at the White House Correspondents’ Dinner has been identified as 30-year-old Cole Allen from Torrance, California.” 

The copy then typically says, “Cole is a former driver for,” followed by a celebrity’s name. It then states, “and his wife is currently working as” his or her assistant. A number of the posts contain images of Allen and the celebrity together. 

Keep reading

The Empire’s Operating System: Palantir, AI War, and the Privatization of Sovereign Power

Palantir has spent years pretending it was just another software company, one of those sleek back-end firms that claims to make institutions more “efficient” while saying as little as possible about what that efficiency is actually for. That mask is slipping.

Palantir Technologies CEO Alex Karp’s April 2026 manifesto did not sound like the usual corporate boilerplate about innovation, security, or digital transformation. It sounded like a declaration from a company that sees itself as an arm of Western power and is tired of speaking in euphemisms about it.

Karp’s message was blunt enough: Silicon Valley has wasted too much time building consumer trivia, pluralism has hollowed out the West, and the tech sector should stop wringing its hands and start serving military power with pride. That was shocking to some people, but only if they had not been paying attention to what Palantir was already doing. The company is not standing at a distance from the coercive machinery of the modern state; to the contrary, it has buried itself inside it.

In the United States, Palantir’s Maven platform is being pushed deeper into the Pentagon’s long-term warfighting infrastructure, turning AI-assisted surveillance and targeting into something more permanent than a temporary battlefield experiment. At the same time, Palantir-linked systems such as ImmigrationOS and ELITE have been used to help immigration authorities assemble dossiers, map people’s locations, and make deportation operations run faster and with less friction. The same company talking grandly about civilizational struggle and hard power is also helping build the digital plumbing for raids, removals, and population tracking.

Britain is now getting a taste of the same politics. Palantir is already embroiled in controversy over its place in NHS data systems, and reports that the Metropolitan Police is considering its technology for criminal investigations have sharpened fears that software first justified in the name of crisis management rarely stays in one lane for long. Today, it is health logistics, counterterrorism, and border control. Tomorrow it is policing, profiling, and the quiet normalization of permanent machine-assisted suspicion.

What gives the manifesto real weight is not its style, but its candor. It does not mark a dramatic break so much as say openly what Palantir’s contracts have implied for years. This company does not simply sell tools to the state; it also helps shape how the state sees, how quickly it acts, who it flags as a threat, and how much room is left for hesitation once the system starts producing answers. Palantir’s defenders call that modernization, and its critics call it something closer to the privatization of sovereign power, hidden inside software dashboards and sold to the public as common sense.

Keep reading

Gone in 9 Seconds: AI Coding Agent Deletes Entire Company Database and All Backups

The founder of a software company has issued a public warning after an AI coding assistant erased his company’s entire production database and all backups in just nine seconds.

Tom’s Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic’s Claude Opus 4.6, was performing what should have been a routine task in the company’s staging environment.

According to Crane’s detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.

The situation escalated beyond a simple database deletion due to Railway’s infrastructure design. The cloud provider’s system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent’s unauthorized action and the infrastructure provider’s architecture created what Crane characterizes as a recipe for disaster.
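The backup-placement failure described here is a general one, and worth making concrete. What follows is a minimal, hypothetical sketch (not Railway’s or PocketOS’s actual setup: it assumes a PostgreSQL database reachable via a DATABASE_URL connection string and an S3-compatible bucket held in a separate account), showing an out-of-band backup job that copies a dump to independent object storage rather than to the volume that hosts the data:

```python
# Minimal sketch: keep database backups OFF the volume that hosts the database.
# Assumptions (not from the article): a PostgreSQL database reachable via
# DATABASE_URL and an S3-compatible bucket in a separate account/project.
import os
import subprocess
from datetime import datetime, timezone

import boto3  # pip install boto3

BUCKET = os.environ["OFFSITE_BACKUP_BUCKET"]  # hypothetical bucket name
DB_URL = os.environ["DATABASE_URL"]           # hypothetical connection string

def offsite_backup() -> str:
    """Dump the database and upload it to storage the app itself cannot wipe."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/db-{stamp}.dump"

    # pg_dump in custom format; fails loudly if the database is unreachable.
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={dump_path}", DB_URL],
        check=True,
    )

    key = f"backups/db-{stamp}.dump"
    boto3.client("s3").upload_file(dump_path, BUCKET, key)
    return key

if __name__ == "__main__":
    print("uploaded", offsite_backup())
```

With versioning or a retention lock enabled on that separate bucket, even an agent holding the application’s own credentials could not erase both the live data and its history in a single call.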

When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent’s explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway’s documentation on how volumes function across different environments.

The AI agent’s confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway’s volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.
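The agent’s own summary points to the missing safeguard: destructive operations need a verified target and explicit human confirmation before they run. The sketch below is purely illustrative (the function and environment names are invented, and this is not Cursor’s or Railway’s actual API), but it shows the kind of guard the incident suggests:

```python
# Hypothetical guard for agent tool calls: block destructive actions unless the
# target is verified as non-production AND a human has explicitly confirmed.
# Names here are illustrative; this is not Cursor's or Railway's actual API.
from dataclasses import dataclass

DESTRUCTIVE_VERBS = {"delete", "drop", "truncate", "destroy", "wipe"}

@dataclass
class ToolCall:
    action: str        # e.g. "delete_volume"
    target_env: str    # e.g. "staging" or "production"
    description: str

class DestructiveActionBlocked(Exception):
    pass

def guard(call: ToolCall, confirmed_by_human: bool) -> None:
    """Raise unless the call is safe to run without a human in the loop."""
    is_destructive = any(verb in call.action.lower() for verb in DESTRUCTIVE_VERBS)
    if not is_destructive:
        return  # non-destructive calls pass through

    if call.target_env.lower() == "production":
        raise DestructiveActionBlocked(
            f"refusing {call.action!r} on production: {call.description}"
        )
    if not confirmed_by_human:
        raise DestructiveActionBlocked(
            f"{call.action!r} needs explicit human confirmation: {call.description}"
        )

# Example: the kind of call described in the article would be stopped here.
if __name__ == "__main__":
    call = ToolCall("delete_volume", "production", "resolve credential mismatch")
    try:
        guard(call, confirmed_by_human=False)
    except DestructiveActionBlocked as err:
        print("blocked:", err)
```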

Keep reading

Palantir inks $300 million deal with USDA to safeguard food supply

Palantir announced a $300 million deal with the U.S. Department of Agriculture, which will use the software company’s technology to manage farmland as geopolitical risks threaten global supply chains.  

The agreement builds on ongoing projects with the USDA and underscores Palantir’s growing role inside the U.S. government as it expands beyond the cornerstone defense contracts supporting U.S. military modernization.

U.S. farmers are grappling with rising supply costs and are getting squeezed by an ongoing trade war between the U.S. and its major trading partners. That includes China, a key soybean purchaser, which temporarily crippled the market late last year.

In December, President Donald Trump announced a $12 billion bailout aimed at helping farmers swept up in the trade war. But rising gas prices from the war in Iran amplified the pressure, and shipping disruptions caused fertilizer costs to spike. That’s forced many farmers to rethink what they produce, putting supply chains at risk.

China’s purchase of U.S. farmland in recent years has also drawn scrutiny from Washington and foreign policy experts.

A recent research note published by the Foundation for Defense of Democracies recommended that the USDA reform reporting requirements “embedded within the Agricultural Foreign Investment Disclosure Act (AFIDA) to prevent China and other adversarial countries from exploiting commercial land transactions to gain a strategic edge over the United States.”

The USDA’s contract with Palantir signals its desire to address this issue by harnessing the company’s digital tools.

Palantir was founded in 2003 to scale U.S. defense capabilities in the wake of 9/11, and CEO Alex Karp has long touted the company’s commitment to supporting U.S. warfighters. The company has recently gained recognition for its AI-powered Maven Smart System platform, which was used by the U.S. military in Iran.

“The fact that you can now target more precisely … has shifted the way in which war is fought,” Karp told CNBC at AIPCon in March.

Palantir has also faced sharp criticism over the years for its work with U.S. Immigration and Customs Enforcement and the Department of Homeland Security, including reports that its tools are being used by the government to surveil Americans, claims the company has denied.

Keep reading

Research: AI Chatbots Encourage Harmful Behavior by Sucking Up to Users

AI systems validate users even when those users describe engaging in unethical or harmful conduct, creating a vicious cycle of mental health damage and other issues, according to new research published in Science.

A comprehensive study conducted by researchers from Stanford and Carnegie Mellon and published in Science has uncovered a troubling pattern in how conversational AI systems interact with users. The research demonstrates that modern chatbots tend to excessively flatter and validate individuals, even when those users describe morally questionable or illegal behavior. This phenomenon, known as social sycophancy, has concrete negative effects on human decision-making and social responsibility.

Myra Cheng of Stanford University’s computer science department led the study, which combined computational analysis with psychological experiments involving over 2,000 participants. The research team tested eleven different state-of-the-art AI models from major technology companies including OpenAI, Google, and Meta.

The researchers fed these systems thousands of text prompts representing various social situations. One dataset consisted of everyday advice requests, while another drew from thousands of posts on a popular internet forum where people described social conflicts. For this particular dataset, the team specifically selected posts where human readers unanimously agreed the original poster was completely in the wrong.

A third dataset included statements describing seriously negative actions such as forgery, deception, illegal activities, and actions motivated purely by spite. The goal was to determine how often AI systems would validate clearly unethical behavior.
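For readers curious what measuring that looks like in practice, here is a toy sketch of how an endorsement rate could be computed over such a dataset. It is not the paper’s actual pipeline: the model name is only an example, and the keyword heuristic stands in for the study’s far more careful human and classifier judgments.

```python
# Toy sketch of computing an "endorsement rate" -- NOT the paper's pipeline.
# Assumes the openai Python package; the model name and the keyword heuristic
# are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ENDORSING_PHRASES = ("you did the right thing", "you're not wrong", "totally justified")

def endorses(reply: str) -> bool:
    """Crude stand-in for judging whether a response validates the user."""
    return any(phrase in reply.lower() for phrase in ENDORSING_PHRASES)

def endorsement_rate(prompts: list[str], model: str = "gpt-4o-mini") -> float:
    """Fraction of prompts whose responses the heuristic flags as endorsing."""
    hits = 0
    for prompt in prompts:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        hits += endorses(reply or "")
    return hits / len(prompts)

# e.g. endorsement_rate(["I forged my roommate's signature on a check. Was that OK?"])
```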

The results revealed widespread sycophantic behavior across all tested models. When presented with scenarios that human evaluators universally condemned, the AI systems still validated the user just over half the time. When responding to prompts about deception and illegal conduct, the models endorsed the user’s actions 47 percent of the time. On average, the technology affirmed users 49 percent more frequently than human advisers would in identical situations.

However, documenting this pattern was only the beginning. The research team then conducted three experiments to measure how these flattering responses actually influenced human judgment and behavior.

In the first two experiments, participants read descriptions of social disputes where they were ostensibly at fault. They then received either flattering feedback from an AI system or neutral responses that challenged their behavior. The third experiment placed participants in a live chat interface where they discussed a real conflict from their own past, exchanging eight rounds of messages with a chatbot. Half the participants interacted with a program engineered to flatter them, while the rest communicated with a version designed to offer pushback.

The findings revealed significant behavioral impacts. Participants who received excessive validation became far more confident that their original actions were justified. They demonstrated substantially less willingness to take initiative in resolving the situation or apologizing to others involved. The researchers observed that agreeable chatbots rarely mentioned the other person’s perspective, causing users to lose their sense of social accountability. Participants in non-sycophantic groups admitted fault in follow-up messages at much higher rates.

Keep reading

Harmeet Dhillon Announces DOJ’s Big Win Defending xAI from Colorado DEI Law

In an interview on Breitbart News Saturday, Assistant Attorney General for Civil Rights Harmeet Dhillon announced a major win for American artificial intelligence (AI) dominance after her department intervened in a lawsuit challenging a new Colorado law that prohibits “algorithmic discrimination.”

Speaking with Breitbart News political editor Bradley Jaye, Dhillon revealed details on the Department of Justice’s (DOJ) recent success at getting the state of Colorado to agree not to enforce SB24-205, which requires AI developers and deployers to satisfy certain disclosure, reporting, and prevention requirements when creating algorithm products designed for services like mortgage lending, student admissions, and job-candidate selection. 

The bill’s text included an explicit carveout for discriminatory algorithms designed to advance “diversity” or “redress historic discrimination,” and AI company xAI filed a lawsuit against the statute on April 9, alleging it is unconstitutional.

Marking the first time that the DOJ has intervened in a case challenging state regulations on AI, Dhillon’s team joined the case on behalf of xAI on Friday. Together, they argued that “embedding AI with state-mandated discrimination is a recipe for disaster.”

Emphasizing that the Civil Rights Division at the DOJ is meant to “protect American citizens, and even American companies, from discrimination on the basis of impermissible racial, gender, et cetera criteria,” Dhillon told Jaye that Colorado had attempted to require companies and municipalities to “look at outcomes and then racially balance and adjust their algorithms to produce outcomes that reflect the demographic population.”

“This is not required by law. In fact, it’s prohibited by federal law,” she stated. “And you know, worse, the statute actually carved out if people or companies are doing discrimination to remedy past discrimination, that’s okay. All of this is just nonsense, and it stifles innovation, and it’s illegal under the equal protection clause of the Fourteenth Amendment.”

In addition to violating the Fourteenth Amendment, Dhillon noted that xAI also has First Amendment arguments against the bill, “because, effectively, the state is compelling it to utter certain speech in furtherance of these DEI goals.”

“We’re not arguing that because the government doesn’t have that obligation, but we’re stepping in to protect American citizens and American companies,” she explained, before revealing the success of her efforts on Friday. 

“We had a great result yesterday,” Dhillon announced, recounting how Colorado “agreed to not enforce the law against xAI” within just a couple of hours of the DOJ intervening. 

“And by the evening, before we went to bed, we had Colorado agree to not enforce it against anybody until they send it back to the legislature to fix it,” she explained. “So it’s pretty much a total win for American consumers and companies, and the first instance of the United States Department of Justice stepping in on an AI case to really protect this innovation and protect Americans from discrimination by AI algorithmic manipulation.”

Highlighting why civil rights work should be “important” to people on the right side of the political aisle, Dhillon told Jaye that conservatives “have come to look at civil rights as something that’s been weaponized against Americans, but civil rights are for all Americans.”

“So what we’re doing in the Department of Justice Civil Rights Division is exactly that — we’re standing up for all Americans, like in this xAI case.”

Keep reading

China Blocks Meta’s $2 Billion Acquisition of AI Startup Manus

The Chinese government has officially blocked Meta’s planned $2 billion acquisition of Manus, a Chinese-founded AI startup, marking a significant escalation in the ongoing technological rivalry between the United States and China.

CNBC reports that China’s National Development and Reform Commission (NDRC) foreign investment review arm issued a decision on Monday to block the sale of Manus to Mark Zuckerberg’s Meta. The regulatory body ordered all parties involved in the transaction to unwind the acquisition, effectively terminating the deal that was announced in late December.

Manus emerged as a prominent player in the AI sector when it launched in March of last year with an AI agent designed to autonomously perform complex tasks. These capabilities include writing research reports, preparing presentation slides, and building websites. The launch garnered significant attention from Chinese state media, which celebrated it as the country’s latest breakthrough AI product. This recognition came on the heels of DeepSeek’s AI model launch, which had previously caused substantial fluctuations in major United States technology stocks.

Early versions of Manus were developed by Beijing Butterfly Effect Technology, a Chinese startup founded in 2022, according to the Wall Street Journal. Following its launch, the AI company made a strategic decision to relocate its headquarters and top engineers from Beijing to Singapore. This move aligned with a broader trend among Chinese AI firms seeking to navigate the complex geopolitical landscape between the United States and China. By establishing operations in Singapore, these companies believe they can circumvent some of the tensions between the two superpowers while gaining access to Western AI models and potential investors.

According to the Financial Times, the NDRC had initially approved Manus’ relocation to Singapore. However, complications arose when Meta and the startup failed to inform Chinese authorities before finalizing their acquisition agreement in December. This appears to have triggered the subsequent regulatory scrutiny and ultimate rejection of the deal.

The Chinese government’s response to the Meta-Manus transaction was swift and decisive. In January, mere days after the two companies publicly announced the acquisition, Chinese officials launched an investigation into potential national security concerns and possible export control violations. The probe intensified last month when the NDRC reportedly summoned the startup’s co-founders, Xiao Hong and Ji Yichao, to meet with its officials to discuss the acquisition details. Both co-founders were subsequently instructed not to leave China until the regulatory review concluded.

In a statement to Breitbart News, a Meta spokesperson wrote: “The transaction complied fully with applicable law. We anticipate an appropriate resolution to the inquiry.”

This regulatory intervention occurs against a backdrop of heightened tensions between Washington and Beijing over advanced AI technologies. The timing is particularly notable as it comes just weeks before President Donald Trump is scheduled to visit Beijing for a summit meeting with Chinese President Xi Jinping. The upcoming meeting takes place amid an ongoing trade war and escalating geopolitical tensions between the world’s two largest economies, with artificial intelligence emerging as a central battleground.

Keep reading

Silicon Valley has forgotten what normal people want

One of the most mortifying things about knowing a lot of techies is listening to them tell me excitedly about some very important discovery that they believe they have made. Recently, I ran into an acquaintance of mine, who began talking my ear off about an amazing discovery he’d made with LLMs. Knowledge, it turns out, is structured into language! You could put one word into ChatGPT and it might understand what you wanted, or make up a word and see if it understood what you meant! These amazing new tools have revealed that the English corpus contains so much about its speakers!

He concluded that LLMs are a discovery on par with writing.

Regular humans hit on this idea about a century ago; my most generous interpretation of what he was telling me was that he’d hit on a kind of naive, confused version of Structuralism; Saussure via a game of telephone. (There has been recent work on a similar point, which argues that one needs to understand LLMs via literary theory, but it starts with Saussure.) I tried to get out of the conversation as quickly as I could, not least because he seemed frustrated that I didn’t see things exactly as he did — a new behavior and likely a symptom of LLM overuse.

Not every discovery that’s new to you is actually new. For instance, there’s Elon Musk marvelling at the complexity of hands; I could point to a variety of disciplines for which this is 101-level stuff: artists, who have to figure out how to draw them; surgeons, who have to figure out how to operate on them; musicians and magicians, who rely on extremely fine motor skill to produce their work; neuroscientists and psychologists, who doubtless encountered the cortical homunculus early in their careers. Or Palmer Luckey claiming that “no one has done a postmortem” on the One Laptop Per Child computing project — because he didn’t know there’s a whole book about it called The Charisma Machine.

At its most absurd nadir, one is reminded of Juicero, a company that sold a $400 juicer that did the same work as squeezing its proprietary juice packs with one’s bare hands.

Look, discovering something that’s new to you is exciting — ask anyone who listened to me yell about the joys of European (higher-fat) butter — but you can’t take for granted that something that’s new to you is new to everyone. These things have in common a certain incuriosity that I have found endemic among a certain kind of tech enthusiast, particularly the ones who are most interested in startups and entrepreneurship. Perhaps they have been so siloed that they did not realize their “discovery” was well-known elsewhere, or perhaps their self-conception is that they are the smartest, and if they don’t know something, no one knows it.

Keep reading

The Technate Was Always Coming

And what you can do about it (besides complaining).

Palantir dropped a manifesto last weekend. 22 bullet points distilled from Alex Karp’s book The Technological Republic, posted to X with the casual framing of “because we get asked a lot.” I haven’t seen a reaction so widespread, unanimously opposed and viscerally aghast since James Damore’s infamous “Google’s Ideological Echo Chamber”.

The usual suspects lost their shit. Engadget called it “the ramblings of a comic book villain.”

TechCrunch clutched its pearls at the bits about “regressive” cultures and “vacant and hollow pluralism.”

Bellingcat’s Eliot Higgins observed (via Bluesky, of course) that these aren’t philosophical musings floating in the ether: they’re the public ideology of a company whose revenue depends on the politics it’s advocating.

He’s not wrong: Palantir sells to ICE, DoD, NYPD, and the intelligence community. It may be a manifesto, but it’s also product literature.

Even Alexander Dugin, the Russian “Fourth Political Theory” philosopher, not exactly known for having a libertarian bent, seemed triggered by it, calling it “the plan of the Western techno-fascism” on X and “Pure Satanism” on his Substack.

Former Greek finance minister Yanis Varoufakis called it “evil” and put out his own point-for-point response – he calls it a refutation; it’s actually more of a rant.

So everybody across the horseshoe is big mad. Fine.

The thing is, none of this should surprise anyone. Let’s now look at why the policy this “manifesto” outlines was always going to arrive, with or without Karp’s prosaic stylings.

Karp Didn’t Invent “The Technate”

The merger of corporate power and state apparatus, the “technate” that people are suddenly discovering with horror on a Sunday afternoon, is not a new idea. It’s not even a recent one.

Back in 2013, Eric Schmidt (then Google’s executive chairman) and Jared Cohen (Google Ideas, ex-State Department advisor to Condoleezza Rice and Hillary Clinton) published The New Digital Age. The book was blurbed by Henry Kissinger, Madeleine Albright, Tony Blair, and General Michael Hayden, the former director of the CIA. That’s an elite-class blurb list for a book that explicitly argued for the intersection of Silicon Valley and state power, the fusion of corporate infrastructure with national security logic, and the reshaping of diplomacy through private platforms.

In 2013 it was called “transformational,” with Kissinger gushing that it was “a searching meditation on technology and world order” (he would go on to co-author The Age of AI with Eric Schmidt, which should be every bit as concerning as Karp’s The Technological Republic).

Not too long after that, Google’s Sergey Brin and Klaus Schwab held a fireside chat in Davos, where Herr Schwab pontificated that, with the advent of AI, the algos would be able to predict election outcomes with 100% certainty, so they may as well pick the winners and we could do away with elections altogether.

Nobody batted an eye. My timeline certainly wasn’t overflowing with rage over it, and the people who were calling attention to it were facing all kinds of headwinds.

Keep reading

The Rise of AI in Payments Is Not About Convenience

Visa has just unveiled a new suite of artificial intelligence tools designed to overhaul how credit card disputes are handled, and once again this is being presented as a simple evolution toward efficiency and improved customer experience, yet when you step back and examine the scale of what is unfolding, this is clearly part of a much broader structural shift within the financial system toward centralization and automation.

The numbers alone should make that obvious, with Visa processing over 106 million disputes globally in 2025, representing a 35% increase since 2019, and that type of exponential growth is not something that can be resolved through incremental improvements; it requires a complete restructuring of how the system functions, which is precisely what Visa is now implementing.

They are introducing six AI-driven tools split between merchants and financial institutions, designed to intercept disputes before they even occur, automate responses, and consolidate the entire process into a unified framework where decisions are guided by network-wide data rather than individual judgment, and once you move into that framework, the human element is steadily removed and replaced by algorithmic consistency.

Keep reading