Army IDs Two Suspects Connected to Drone Theft at Fort Campbell

The U.S. Army has identified two suspects in the theft of four drones at Fort Campbell in Kentucky.

As The Gateway Pundit previously reported, in a post on the U.S. Army Fort Campbell Facebook Page last week, a spokesperson revealed that four Skydio X10D Drone Systems were stolen from the 326th Division Engineer Battalion building.

The drones were originally stolen in November of last year, but Fort Campbell released information and surveillance photos to the public on March 11.

Now, officials at Fort Campbell have announced that the suspects behind the drone theft have been identified, but have not released their names.

The officials at Fort Campbell added, “The individuals responsible had authorized access to the military installation and the building, and they defeated the locks on the storage cages to perpetrate this theft. This was a targeted act, not a random breach of security.”

Per WSMV:

Fort Campbell provided an update on the investigation into four drones stolen from a government building in late November 2025.

Fort Campbell reported that the Department of the Army Criminal Investigative Division investigation led to the identification of two suspects, credible evidence, and the possible whereabouts of the missing quadcopter drones.

“This is an active criminal investigation, and we are working diligently to resolve this matter,” Fort Campbell said.

Fort Campbell is adamant there is no threat to the public and that the stolen drones were equipped only with small cameras.

The drones stolen were high-tech Skydio X10D drones, which are unmanned aerial systems designed with modular payload capability.

The U.S. Army’s 7th Army Training Command used the Skydio X10D last July to drop a live M67 grenade for the first time at the Grafenwoehr Training Area in Germany.

Keep reading

‘Pokémon Go’ Players Unknowingly Contributed 30 Billion Images to Train Delivery Robots

Nearly a decade after Pokémon Go transformed the real world into an augmented reality playground, the data collected from hundreds of millions of players is being repurposed to help autonomous delivery robots navigate city streets.

Popular Science reports that Niantic Spatial, part of the team behind the popular augmented reality game Pokémon Go, has announced a partnership with Coco Robotics, a company specializing in short-distance delivery robots for food and groceries. The collaboration will utilize Niantic’s Visual Positioning System, a navigation technology trained on more than 30 billion images captured by Pokémon Go users over the years, to help delivery robots navigate sidewalks and urban environments with unprecedented precision.

The Visual Positioning System can reportedly pinpoint location down to a few centimeters by analyzing nearby buildings and landmarks, offering a significant improvement over traditional GPS technology. This crowdsourced mapping effort represents one of the largest real-world data collection projects ever undertaken through a mobile gaming application, and demonstrates how user-generated content can be repurposed years after its initial collection.

“It turns out that getting Pikachu to realistically run around and getting Coco’s robot to safely and accurately move through the world is actually the same problem,” Niantic Spatial CEO John Hanke said in a recent interview with MIT Technology Review.

When Pokémon Go launched in 2016, it became a cultural phenomenon, attracting approximately 230 million monthly active players at its peak. The game prompted players to physically travel to specific locations and point their phone cameras at various angles while searching for virtual creatures superimposed onto real-world environments. While the game’s popularity has declined since its heyday, it still maintains around 50 million active users by some estimates.

The data collection effort received a significant boost in 2020 when Niantic added a feature called Field Research, which incentivized players to scan real-world statues and landmarks with their cameras in exchange for in-game rewards. Additional data reportedly came from areas designated as Pokémon battle arenas. These scans created detailed 3D models of the real world, capturing the same locations across varying weather conditions, lighting scenarios, angles, and heights.

Keep reading

First ‘Robot Arrest’ Takes Place in China After Droid Harasses Elderly Woman in the Streets

Is technology ‘breaking bad’?

Around a month ago, the world was faced with a stunning display of Chinese art and robotics, as a team of droids danced and performed flawless Kung Fu moves in perfect sync during the Lunar New Year celebrations.

But last week, a much less flattering portrait of the new technology was broadcast to the world, as an incident in the streets of the Chinese city of Macau ended with what is arguably the first android ‘arrest’.

The New York Post reported:

“The surreal incident occurred last week in the city of Macau, with the startled 70-year-old ending up in the hospital following her encounter with the 4-foot 4-inch bot.

Two cops escorted the humanoid bot off the busy street. They reportedly reprimanded the man who was operating the android remotely.”

Keep reading

‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, flagging not a single Democrat, Breitbart News social media director Wynton Hall demonstrated to Fox News in a report published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading

Companies Are Starting To Enforce AI Use. Is That A Good Or Bad Thing?

Years ago, I was working on the editorial side for what was then a hot new media company, and found myself spending more and more time with Johan, the lead programmer, and his team, asking them a lot of annoying questions as it was all so new – certainly to me. I was standing over Johan’s left shoulder, mesmerized by whatever new video game he was obsessing over that week…when suddenly, out of nowhere, a spreadsheet and a pie chart appeared on his screen.

“Whatcha got there, Johan?” asked Jim, Johan’s boss, peering over a sheaf of print-outs as he sharked past the cubicle.

“Hey, just looking at some numbers,” Johan replied. Johan had hit the “game key” in the nick of time – in those days, every video game had a game key – ALT-G if memory serves – calling up a slight variation of the same spreadsheet and pie chart.

This would never happen today. First, you’re probably not working in a cubicle, and if you are, it’s not the game key you’d hit to give your boss the impression that you’re actually doing productive work…it would be the “AI key.”

“Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It.”

This article appeared in the February 24 edition of the Wall Street Journal under the subtitle: “From startups to giants, including Meta and Google, companies are factoring AI use into performance reviews and trying to track productivity gains.”

Across industries, companies are now enforcing AI use through performance reviews, dashboards that track adoption, and explicit mandates that tie it to compensation and promotion. What began in Silicon Valley has rapidly spread to consulting firms, banks, manufacturers, hospitals, and even government agencies.

As you’d expect, Meta, Google, Amazon, and Microsoft were the first to move from encouragement to enforcement. Employees at these firms now see AI usage metrics appear in quarterly reviews. Non-adopters have reported stalled promotions or explicit warnings that “AI fluency” is a core competency (The Wall Street Journal, Feb 2026, reporting on internal policies).

The trend has jumped sectors. PwC requires every consultant to complete an “AI + Human Skillset” curriculum and incorporates usage into evaluations (Business Insider, Feb 5, 2026). Colgate-Palmolive’s “AI evangelist” tracks adoption across global teams. Major banks have begun tying bonuses to the number of AI-assisted analyses completed. Even some hospitals now require doctors and nurses to use AI-assisted diagnostic tools for certain procedures.

Keep reading

Singularity Update: You Have No Idea How Crazy Humanoid Robots Have Gotten

I just spent the afternoon at Figure headquarters in San Jose with Brett Adcock and David Blundin, and I’m still processing what I saw.

We’re not talking about concept robots. We’re talking about fully autonomous humanoid robots running neural networks end-to-end, doing kitchen work, unloading dishwashers, organizing packages – for hours at a time, with no human intervention.

Today? Figure’s robots are doing 67 consecutive hours of autonomous work. One error in 67 hours. That’s not a demo. That’s a product.

And here’s what most people don’t understand: the gap between “doing one task really well” and “doing every task a human can do” is collapsing at exponential speeds.

Let me explain why…

NOTE: Brett has been a past Faculty Member at my Abundance Summit, where leaders like him share insights years before the mainstream catches on. In-person seats for the 2026 Summit next month are nearly sold out. Learn more and apply.

The Death of C++ and the Rise of the Neural Net

When I first visited Figure, they had several hundred thousand lines of C++ code controlling the robots. Handwritten. Expensive. Brittle.

Every new behavior required engineers to anticipate edge cases, write more code, test it, debug it. It was the software equivalent of teaching a toddler to walk by writing an instruction manual.

In the last year, Figure deleted 109,000 lines of C++ code.

All of it. Gone.

What replaced it? A single neural network that controls the entire robot: hands, arms, torso, legs, feet. Full-body coordination. Real-time planning. Dynamic response to unexpected situations.

This is Helix 2, their latest AI model, and it’s a fundamentally different approach to robotics.

Here’s why this matters: neural nets learn from experience, not instructions.

You don’t code a robot to “grab a cup.” You show it thousands of examples of grasping objects—different shapes, weights, materials—and the neural net extracts the underlying patterns. It learns what “grasping” is at a representational level.

And once it understands grasping? It can generalize to objects it’s never seen before.

Brett put it simply: “If you can teleoperate the robot to do a task, you can train the neural net to learn it.”

That’s the unlock. If the hardware is capable—if the motors, sensors, and joints can physically perform the movement—then the AI can learn it from data.

Compare that to traditional robotics, where you’d need to write thousands of lines of code for every single new task. That approach doesn’t scale. Neural nets do.

The implication: Every robot in the fleet learns from every other robot’s experience. When one Figure robot masters folding laundry, every Figure robot on the planet instantly knows how to fold laundry.

Humans don’t work like this. Robots do.

Keep reading

Trump Says, ‘We Don’t Need Ukraine’s Help,’ Rejects Zelensky Offer of Assistance With Drone Defense

Quite the Trump dismissal of Zelensky’s offers.

Now that a much bigger crisis is ongoing in the Middle East, the Ukrainian regime is trying to remain in the spotlight of the world’s media and in the thoughts of world leaders who have, for years, paid for its war effort.

But to do that and manage a losing war at once seems to be too much for Kiev regime leader Volodymyr Zelensky, who has reportedly become more and more aggressive in his criticism of Russia, the Europeans, and, of course, of the US.

As of late, unsurprisingly, Zelensky is decrying the Trump administration’s decision to temporarily lift sanctions on Russian oil ‘already at sea’.

At the same time that he is super cranky, the embattled Ukrainian leader has been trying to flatter Trump over his operation against Iran, while also offering help with ‘drone defenses’ against the Iranian Shahed drones.

But the fact is: if their drone defenses were so good, they would not be begging for Patriot missiles all the time, nor would they be in the dark because of combined Russian missile-and-drone attacks on almost all of their power generation facilities.

Keep reading

The Dark Side of AI: Innocent Grandmother Wrongfully Jailed for 6 Months After Facial Recognition Error

A Tennessee grandmother spent nearly six months behind bars in North Dakota, a state she had never even set foot in, after being wrongfully identified by AI facial recognition technology in a bank fraud investigation.

The Grand Forks Herald reports that Angela Lipps, a 50-year-old mother of three and grandmother of five from Tennessee, found herself trapped in a nightmare that began last July when U.S. Marshals arrested her at gunpoint while she was babysitting four young children. Fargo police had used facial recognition software to identify her as the primary suspect in an organized bank fraud case, despite the fact that she had never set foot in North Dakota.

The case began in April and May 2025 when Fargo Police Department detectives investigated several bank fraud incidents. Surveillance footage captured a woman using a fraudulent U.S. Army military identification card to withdraw tens of thousands of dollars from local banks. To identify the suspect, investigators employed facial recognition software, which incorrectly matched the woman in the videos to Lipps.

According to court documents obtained through an open records request, the detective assigned to the case reviewed Lipps’ social media accounts and Tennessee driver’s license photo after receiving the facial recognition match. In the charging document, the detective stated that Lipps appeared to be the suspect based on facial features, body type, hairstyle, and hair color. Notably, no one from the Fargo Police Department contacted Lipps to question her before filing charges.

Lipps was arrested on July 14 and booked into a county jail in Tennessee as a fugitive from justice. She faced four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota. Held without bail due to her fugitive status, Lipps spent 108 days in the Tennessee jail before North Dakota officers transported her to Fargo on October 30.

“It was so scary, I can still see it in my head, over and over again,” Lipps said during an interview about her ordeal.

Keep reading

AI Can Now Unmask Anonymous Internet Users, New Study Finds

It looks like AI can now unmask any anonymous account on the internet. That’s according to a new study by Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), and Florian Tramèr (ETH Zurich), published on arXiv.

In the paper, “Large-Scale Online Deanonymization with LLMs,” the researchers show that modern large language models (LLMs) can re-identify people behind pseudonymous online accounts at a scale and accuracy that far surpass previous techniques.

The core contribution is an automated deanonymization pipeline powered by LLMs, according to the new study. Instead of relying on structured datasets or hand-engineered features—like earlier attacks on the Netflix Prize dataset—the system works directly on raw, unstructured text.

Given posts, comments, or interview transcripts written under a pseudonym, the pipeline extracts identity-relevant signals, searches for likely matches using semantic embeddings, and then uses higher-level reasoning to verify the most promising candidates while filtering out false positives. The result is a scalable attack that mirrors—and in some cases exceeds—the effectiveness of a dedicated human investigator.
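
The candidate-search step described above can be illustrated with a minimal sketch: embed the pseudonymous text and each candidate profile as vectors, then rank candidates by cosine similarity before a final verification pass. The vectors and profile names below are purely hypothetical stand-ins for LLM-derived embeddings, not the paper’s actual system.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for LLM-derived text embeddings (hypothetical values).
pseudonym_profile = np.array([0.9, 0.1, 0.3])
candidates = {
    "profile_A": np.array([0.88, 0.12, 0.31]),  # semantically close candidate
    "profile_B": np.array([0.10, 0.90, 0.20]),  # unrelated candidate
}

# Rank candidates by similarity; in the pipeline, a higher-level reasoning
# step would then verify or reject the top hit to filter false positives.
ranked = sorted(candidates, key=lambda k: cosine(pseudonym_profile, candidates[k]), reverse=True)
print(ranked[0])  # profile_A
```

In the real attack, the embeddings come from the full post history rather than a single vector, and the final verification stage is what keeps precision high.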

To evaluate their approach, the researchers constructed three datasets with known ground truth. The first links pseudonymous Hacker News users to real-world LinkedIn profiles, relying on cross-platform clues embedded in public text. The second matches users across movie discussion communities on Reddit. The third takes a single Reddit user’s history, splits it into two time-separated profiles, and tests whether the system can reconnect them.

Across all three settings, LLM-based methods dramatically outperformed classical baselines, which often achieved near-zero recall.

The headline numbers are striking. In some experiments, the system achieved up to 68% recall at 90% precision—meaning it correctly identified a substantial portion of targets while keeping false accusations low. Even when matching temporally split Reddit accounts separated by a year, performance remained strong. In contrast, traditional non-LLM approaches struggled to produce meaningful matches. The findings suggest that advances in reasoning and representation learning have transformed deanonymization from a niche, data-hungry attack into a broadly applicable capability.
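
To make the “68% recall at 90% precision” operating point concrete, here is a small sketch of the arithmetic with hypothetical counts (not figures from the paper): out of 100 true targets, 68 are correctly identified and 32 are missed, while about 8 false matches keep precision near 90%.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts illustrating the reported operating point.
p, r = precision_recall(tp=68, fp=8, fn=32)
print(round(p, 2), round(r, 2))  # 0.89 0.68
```

High precision matters here because each positive is an accusation that a specific real person is behind the pseudonym; the trade-off is the 32% of targets the system fails to identify.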

Keep reading

Palantir CEO Says AI Will Shift Power Away From Democratic Voters and Toward Working-Class Men

Palantir CEO Alex Karp has said that artificial intelligence (AI) could shift economic influence away from highly educated voters who tend to support Democrats and toward vocationally trained, working-class men.

In an interview with CNBC, Karp discussed the broader societal impact of artificial intelligence and how it is expected to transform employment.

“This technology disrupts humanities-trained, largely Democratic voters, and makes their economic power less, and increases the economic power of vocationally trained, working-class, often male voters,” Karp said.

“So these disruptions are gonna disrupt every aspect of our society,” he said.

“To make this work, we have to come to an agreement of what it is we’re going to do with the technology; how are we gonna explain to people who are likely gonna have less good, and less interesting jobs.”

Keep reading