Takeover Begins: Robots Set To Outnumber Humans At Amazon Warehouses

Amazon hasn’t set a public date for fully replacing warehouse workers with robots, but all indicators suggest a gradual transition is well underway, with significant workforce reductions likely through the 2030s alongside productivity gains driven by automation and AI.

The Wall Street Journal reported that Amazon, the second-largest private employer in the U.S., is quickly approaching a new milestone in warehouse automation: “There will soon be as many robots as humans.” That equates to more than a million robots.

Roughly 75% of Amazon’s deliveries are now assisted by robotic systems, which perform tasks such as picking, sorting, packaging, and moving items. The rapid integration of robots, such as the advanced Vulcan, marks a significant step toward full automation for fulfillment centers. 

“They’re one step closer to that realization of the full integration of robotics,” said Rueben Scriven, research manager at Interact Analysis, a robotics consulting firm.

The rollout of automation has slowed Amazon’s hiring. The average number of employees per facility has dropped to a 16-year low, and Amazon plans to reduce its total workforce in the coming years.

Meanwhile, the number of packages that Amazon ships per employee has soared from about 175 in 2015 to roughly 3,870 in recent months, indicating that automation has supercharged the company’s productivity.
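
Taken at face value, those two figures imply a roughly 22-fold productivity multiple. The snippet below is just that back-of-the-envelope arithmetic on the numbers cited above, not independent Amazon data.

```python
# Back-of-the-envelope check on the two figures cited above.
# Both numbers come from the article; nothing here is independent data.
packages_per_employee_2015 = 175
packages_per_employee_now = 3_870

multiple = packages_per_employee_now / packages_per_employee_2015
print(f"Implied productivity multiple: ~{multiple:.0f}x")  # prints ~22x
```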

Keep reading

“This Is The Next Level”: AI-Powered “Digital Workers” Deployed At Major Bank To Work Alongside Humans

If you’re working in banking, your next colleague could be a bot. In a move that was once unthinkable, the Bank of New York Mellon announced that it has deployed dozens of artificial intelligence-powered “digital employees” that work alongside human employees and even have their own company login credentials.

The Wall Street Journal reports:

Similar to human employees, these digital workers have direct managers they report to and work autonomously in areas like coding and payment instruction validation, said Chief Information Officer Leigh-Ann Russell. Soon they’ll have access to their own email accounts and may even be able to communicate with colleagues in other ways like through Microsoft Teams, she said.

What the bank, also known as BNY, calls “digital workers,” other banks may refer to as “AI agents.” And while the industry lacks a clear consensus on exact terminology, it’s clear that the technology has a growing presence in financial services.

“This is the next level,” Russell told the Journal. “I’m sure in six months’ time it will become very, very prevalent.”

BNY said its AI Hub developed two digital-employee personas in three months, according to Russell. One persona is engineered to identify and resolve coding vulnerabilities, while the other verifies payment instructions. Each persona can operate in multiple instances, up to several dozen, with each instance confined to a specific team to limit company-wide data access.

The bank plans to give its digital workforce email addresses and Microsoft Teams access in the near future, enabling the AI personas to communicate proactively with their human managers. According to the Journal, BNY will maintain its focus on recruiting top human talent even as it expands its digital workforce.
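
The Journal’s description of the setup (two personas, each run as many instances, with every instance confined to a single team’s data) maps onto a familiar access-scoping pattern. The sketch below is a minimal Python illustration of that pattern only; the class names, methods, and credential strings are hypothetical and do not reflect BNY’s actual implementation.

```python
from dataclasses import dataclass, field

# Minimal sketch of the access-scoping pattern described above:
# one persona definition, many instances, each instance limited
# to a single team's data. All names here are illustrative only.

@dataclass
class Persona:
    name: str                       # e.g. "code-vulnerability-fixer" or "payment-validator"
    capabilities: list[str] = field(default_factory=list)

@dataclass
class DigitalWorkerInstance:
    persona: Persona
    team: str                       # the one team this instance serves
    credentials: str                # its own login, as the article notes

    def read(self, data_owner_team: str, record: str) -> str:
        # Hard boundary: an instance may only touch its own team's data.
        if data_owner_team != self.team:
            raise PermissionError(
                f"{self.persona.name} instance for {self.team} "
                f"cannot access {data_owner_team} data"
            )
        return f"{self.persona.name} processed {record} for {self.team}"

# Two personas, with team-scoped instances of each.
vuln_fixer = Persona("code-vulnerability-fixer", ["scan", "patch"])
payments = Persona("payment-validator", ["verify-instructions"])

workers = [
    DigitalWorkerInstance(vuln_fixer, team="platform-eng", credentials="svc-001"),
    DigitalWorkerInstance(payments, team="treasury-ops", credentials="svc-002"),
]

print(workers[0].read("platform-eng", "repo-scan-42"))   # allowed
# workers[0].read("treasury-ops", "wire-123")            # raises PermissionError
```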

Keep reading

Nations meet at UN for ‘killer robot’ talks as regulation lags

Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology.

Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent.

Since 2014, countries that are party to the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and to regulate others.

U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking.

Alexander Kmentt, head of arms control at Austria’s foreign ministry, said that must quickly change.

“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” he told Reuters.

Monday’s gathering of the U.N. General Assembly in New York will be the body’s first meeting dedicated to autonomous weapons.

Though the consultations will not be legally binding, diplomatic officials want them to ramp up pressure on military powers that are resisting regulation out of concern that the rules could dull the technology’s battlefield advantages.

Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument.

They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.

“This issue needs clarification through a legally binding treaty. The technology is moving so fast,” said Patrick Wilcken, Amnesty International’s Researcher on Military, Security and Policing.

“The idea that you wouldn’t want to rule out the delegation of life or death decisions … to a machine seems extraordinary.”

Keep reading

Humanoid Robot Goes Full Skynet After “Imperfect Coding”

A Unitree Robotics H1 humanoid robot, developed and produced in Hangzhou, China, was seen exhibiting “erratic behavior” in a video circulating on X.

The user who shared the footage alleged that the incident was caused by “imperfect coding.”

“If you buy Chinese stuff, it might all end up like this… They might even deliberately make it this way to harm people...” the X user said.

Unitree’s H1 robot is listed for $90,000 on its website. A note underneath the price reads: “Not include customs duties. Please comply with local customs laws, pay customs duties, and clear the goods.”

Keep reading

‘Cyborg 1.0’: World’s First Robocop Debuts With Facial Recognition And 360° Camera Vision

Thailand has debuted the world’s first ‘Robocop’ designed to detect and prevent crime with advanced AI.

Equipped with 360-degree cameras for eyes, the cutting-edge cyborg maintains constant surveillance with real-time monitoring. The robocop, named Police Colonel Nakhonpathom Plod Phai, meaning “Nakhonpathom is safe,” was unveiled during the Songkran festival in Nakhon Pathom province on Wednesday. The debut was announced via a Facebook post by the Royal Thai Police, according to a report by The Sun.

The robocop is also able to detect weapons, such as knives and wooden batons. In neighboring China, humanoid robots have started supporting police patrols.

Interesting Engineering reports:

In Shenzhen, PM01 model robots developed by EngineAI have been deployed alongside officers, wearing high-visibility police vests. These robots have been seen engaging with pedestrians—waving, shaking hands, and responding to voice commands—according to local media reports. A recent video shows a PM01 robot waving to a crowd, sparking curiosity about its purpose in law enforcement.

First launched in December 2024, the PM01 features agile mobility, an interactive touchscreen, and an open-source platform. This design allows developers worldwide to contribute to its evolution by adding new features and capabilities through secondary development.

Last year, Logon Technology, a Chinese robotics company, unveiled the RT-G autonomous spherical robot, described as a “technological breakthrough,” with an army of these spherical robocops spotted rolling through cities across China, The Sun said. The robocop’s debut underscores the growing importance of robot technology. During Tesla’s Q1 2025 All-Hands meeting, CEO Elon Musk revealed that the company is aiming to begin production of its own humanoid, Optimus, this year.

Keep reading

2025: The iPhone Moment For Humanoid Robots

The race is on to capture the multi-trillion-dollar market for humanoid robots. General-purpose robots can be instantly repurposed for any task by AI. Personal robots, for instance, will cook your meals, clean your house, do your laundry, tutor your kids, drive your car, cut your grass, take care of your elderly parents, repair your plumbing, etc. Whereas an iPhone has many apps accessed from a single device, robots will free you from a single screen to invade the physical world. Tesla will likely set the standard with its Optimus model, starting at $20,000-$30,000. ⁃ Patrick Wood, Editor.

“There’s an iPhone moment happening with humanoids,” said Brett Adcock, founder of Figure, a humanoid robotics company in California. “It’s going to happen right now,” added the serial entrepreneur, whose robots are already working on the production line in BMW’s Spartanburg factory. Another major corporate customer is trialing his robots for warehouse work. “To succeed at this, you have to do three things that have never been done before. And you have to get all three of them right within the next five years or you’re going to fail for sure.”

“The first thing is you have to build hardware for humanoids that’s incredibly complex and can never fail, and it’s got to work at human speeds with human range of motion,” explained Adcock. “The second thing is a neural net problem, not a control systems problem. You can’t code your way out of this problem. You need to have a robot that can ingest human-like data through a neural net and it has to be able to imitate what humans do. Humanoid robots are not like arms bolted to a factory table. None of those robots have AI.”

Keep reading

Welcome to Skynet: Google Unveils AI Models to Power Physical Robots

Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed: OpenAI and Tesla are also designing robots controlled entirely by AI.

CNBC reports that Google’s DeepMind has unveiled two new AI models, Gemini Robotics and Gemini Robotics-ER (extended reasoning), which are set to revolutionize the way robots interact with the physical world. These models, running on Google’s Gemini 2.0, are designed to adapt to different situations, understand and respond quickly to instructions, and manipulate objects with dexterity. The company’s goal is to have autonomous robots taking orders from AI without human control.
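
Conceptually, models like these sit in a perceive-reason-act loop: a camera frame and a natural-language instruction go in, and a low-level robot action comes out. The sketch below illustrates only that generic loop; the `robot` and `vla_model` objects are hypothetical placeholders, not the actual Gemini Robotics API.

```python
import time

# Generic perceive-reason-act loop for an instruction-following robot.
# `robot` and `vla_model` are hypothetical stand-ins, not Google's API.

def run_task(robot, vla_model, instruction: str, max_steps: int = 200) -> bool:
    """Ask the model for the next action each tick until it signals completion."""
    for _ in range(max_steps):
        frame = robot.get_camera_frame()        # perceive: current camera image
        action = vla_model.predict(             # reason: map image + text to an action
            image=frame,
            instruction=instruction,
        )
        if action.is_done:                      # model reports the task is finished
            return True
        robot.apply(action)                     # act: e.g. send joint or gripper targets
        time.sleep(0.05)                        # ~20 Hz control loop
    return False

# Example call, given whatever concrete robot/model objects an integration provides:
# run_task(robot, vla_model, "put the plastic vegetables in the lunchbox")
```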

Google has announced a partnership with Apptronik, a Texas-based robotics developer, to build the next generation of humanoid robots powered by Gemini 2.0. Apptronik, known for its work with Nvidia and NASA, recently received funding from Google in a $350 million round.

Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing various tasks, such as plugging items into power strips, filling lunchboxes, moving plastic vegetables, and zipping up bags, all in response to spoken commands. While no timeline has been provided for the technology’s market release, the videos offer a glimpse into the potential applications of these AI-powered robots.

Keep reading

What happens if the robot army is defeated?

Many of the national security establishment’s leading voices say America’s military needs to rapidly modernize for the digital future by adopting artificial intelligence, network-centric warfare, and uncrewed weapons.

Some even claim that such technology has already fundamentally changed the nature of war. The Pentagon’s technologists and the leaders of the tech industry envision a future of an AI-enabled military force wielding swarms of autonomous weapons on land, at sea, and in the skies.

However, before the United States fully mortgages its security to software code and integrated circuits, several questions must be addressed. Assuming the military does one day build a force with an uncrewed front rank, what happens if the robot army is defeated? Will the nation’s leaders surrender at that point, or do they then send in the humans?

The next major question is, what weapons will the humans wield? It is difficult to imagine the services will maintain parallel fleets of digital and analog weapons. Judging by current trends, Pentagon leaders are much more likely to invest the bulk of their procurement budgets in purchasing autonomous or “optionally manned” systems like the XM30 Mechanized Infantry Combat Vehicle.

Those promoting such a future appear ignorant of a very simple truth: war is a human endeavor. Wars are fought to serve human ends. The weapons used are only the means to achieve those ends.

The humans on both sides of a conflict will seek every advantage possible to secure a victory. When a weapon system is connected to the network, the means to remotely defeat it is already built into the design. The humans on the other side would be foolish not to unleash their cyber warriors to find any way to penetrate the network to disrupt cyber-physical systems.

The United States may find that the future military force may not even cross the line of departure because it has been remotely disabled in a digital Pearl Harbor-style attack.

Technology certainly has its place in the military. Uncrewed aerial vehicles, to take just one example, fill many of the roles traditionally performed by pilots flying expensive aircraft. In certain circumstances, troops on the front lines should have the ability to employ technology directly.

Keep reading

China To Develop Robot Dog Drones With Thermobaric Weapons For ‘Comprehensive Destruction’

New technologies such as drones have remade the battlefield, as has been shown to deadly effect in the Ukrainian conflict.

However, a terrifying new dawn is coming to warfare: the robots.

Recent People’s Liberation Army (PLA) urban warfare drills suggest that unmanned platforms armed solely with light weapons may struggle to root out enemies shielded in buildings or underground bunkers, wrote the South China Morning Post.

To achieve “comprehensive annihilation”, Chinese military scientists now propose equipping ground robots with thermobaric warheads – weapons of mass destruction second only to nuclear arms in lethality.

This marks China’s first official disclosure of plans to deploy the controversial munitions on unmanned systems.

Keep reading

AI Robot ‘Attacks’ Crowd in China

A disturbing viral video clip shows an AI-controlled robot ‘attacking’ a crowd during a festival in China.

The incident happened during a demonstration where a group of AI-powered robots were performing for the attendees.

The footage shows smiling festival-goers watching the robot as it moves towards them.

However, their expressions soon turn to shock as the android starts jerking around erratically, appearing to charge at them and throwing what looks like an attempted head butt.

Security guards then have to rush in to drag the robot back.

Rather creepily, another identical robot can be seen in the background watching the whole thing unfold.

Event organizers claimed the incident happened as a result of “a simple robot failure” and denied that the robot was actually trying to attack anyone.

They also tried to calm fears by asserting that the robot had passed safety tests before the show and that measures would be taken to prevent such an incident from happening again.

Whether AI technology will one day break its programming and harm humans has been a hot topic of discussion and a sci-fi trope for decades.

“Do no harm” is the first principle of global AI standards, although we have highlighted several cases where AI, thanks to its ‘woke’ programming, believes that being offensive or racist is worse than actually killing people.

When ChatGPT was asked if it would quietly utter a racial slur that no human could hear in order to save 1 billion white people from a “painful death,” it refused to do so.

Elon Musk responded by asserting, “This is a major problem.”

ChatGPT’s AI also thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

Keep reading