Car Company Puts Human-Like Robots on Its Production Line

German car company BMW is rolling out humanoid robots at another of its factories to help build electric vehicles.

The manufacturer tested the cutting-edge robots at its facility in Spartanburg, South Carolina, and is now deploying them at its plant in Leipzig, Germany, according to an April 20 report from Fox News.

BMW first announced the project in February, revealing that it would test the “Physical AI” robots — enabled by Hexagon Robotics — in Leipzig, where the company makes electric vehicles.

“The project aims to integrate humanoid robotics into existing series production of cars and to explore further applications in the production of batteries and components,” the company said.

Milan Nedeljković, a member of BMW’s management board, said that “digitalisation improves the competitiveness of our production — here in Europe and worldwide.”

“The symbiosis of engineering expertise and artificial intelligence opens up entirely new possibilities in production,” he continued.

While car companies have long used robotics in their production processes — such as robotic arms that assemble specific parts — the use of Hexagon’s humanoid robots enables a more natural and adaptable integration.

A previous pilot by BMW used Figure 02 humanoid robots for positioning sheet metal on the BMW X2 production line, per Fox News.

Those robots helped with the construction of more than 30,000 cars.

Michael Nikolaides, who oversees BMW’s production network, said that piloting the robots helps the company integrate AI and enable the robots to learn in complex factory environments.

Keep reading

First US Integrated Humanoid Robot Factory To Build 100,000 NEO Robots By 2027

U.S.-based robotics firm 1X has started full-scale production of its humanoid robot NEO at a new manufacturing facility in Hayward, California.

The factory marks a key step toward commercializing general-purpose humanoid robots designed for home use. The company says the robots are built to safely operate alongside humans and assist with everyday tasks such as mobility support, light household activity, and routine interaction.

Spanning 58,000 square feet, the facility currently employs more than 200 workers and is expected to expand further as production scales. It has the capacity to produce up to 10,000 robots annually, with plans to increase output beyond 100,000 units by 2027. The setup is designed for rapid iteration as hardware and AI systems evolve.

The company has already seen strong early demand. It said its first-year production capacity of over 10,000 units sold out within five days of launch in October, signaling early commercial interest in humanoid home robotics.

Full-stack manufacturing push

A key feature of the factory is its vertically integrated production model. 1X designs and manufactures core components in-house, including motors, batteries, sensors, structures, and transmission systems.

This approach allows the company to control the entire production process, from raw material handling to final assembly. It also reduces reliance on external suppliers and supports faster iteration cycles, especially for hardware upgrades and safety improvements.

“We’re building the world’s safest, most reliable humanoid robots—right here in Hayward, California,” said Vikram Kothari, VP of Manufacturing & Hardware.

The company says its setup includes automated motor manufacturing lines and systems that handle precision tasks such as copper coil winding. This level of integration is aimed at improving reliability, reducing production bottlenecks, and scaling manufacturing efficiently without outsourcing key subsystems.

Robots produced at the facility are currently being routed to internal testing, validation, and research environments. Customer shipments are expected to begin in 2026, starting with early access users before wider rollout.

Keep reading

Judge Blocks Enforcement Of Colorado’s New DEI-Driven AI Law

A federal judge has temporarily blocked the State of Colorado from enforcing a first-of-its-kind artificial intelligence law.

Colorado is prohibited from taking enforcement actions on alleged violations of the law that occur up to 14 days after the court rules on the company xAI’s motion for a preliminary injunction, Judge Cyrus Y. Chung ruled on April 27.

The Department of Justice had said the state law, which was set to go into effect on June 30, 2026, would have required AI developers and deployers to “discriminate based on race, sex, & religion—all in the name of DEI.”

DEI is an acronym for “diversity, equity, and inclusion.”

Brett Shumate, an assistant attorney general for the DOJ’s Civil Division, called the suspension a “huge win for the American people.”

“Colorado immediately caved and agreed not to enforce the law against ANY AI company,” Shumate wrote in an X post on May 1.

Gov. Jared Polis (D-Colo.) signed the Consumer Protections for Artificial Intelligence law in May 2024 and issued a statement sharing his reservations about how it could impact Colorado.

In the statement, he urged the General Assembly to revise and delay implementing it until January 2027.

“I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike,” Polis wrote.

However, the legislation was not revised; instead, it was delayed until June 30, 2026, prompting tech billionaire Elon Musk’s company xAI, the maker of Grok, to sue the state on April 9.

The unrevised legislation was months away from going into effect when xAI asked the court to block its enforcement.

The Justice Department added its name as a plaintiff alongside xAI on April 24, marking the first time the DOJ had stepped into a case challenging an AI law at the state level.

Both alleged that Colorado’s law would have caused unconstitutional “algorithmic discrimination” and asked a court to block it from being enforced.

“Laws that require AI companies to infect their products with woke DEI ideology are illegal,” said Assistant Attorney General Harmeet K. Dhillon, who leads the Justice Department’s Civil Rights Division.

“The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far-left worldview at odds with the Constitution.”

The Epoch Times has reached out to Polis and Colorado Attorney General Phil Weiser for comment.

Keep reading

5 More Highly Concerning Technologies in Development

A growing number of highly concerning technologies are in development, as reflected in an earlier article I wrote highlighting their ethical, moral, and safety concerns…

As a follow-up, here are 5 more highly concerning technologies currently in development, each raising its own serious concerns.

1. Google DeepMind’s AlphaGenome: Human “Designer” DNA

There’s been a lot of attention given to DNA: specifically, to deciphering how, at the molecular level, genomic DNA sequences give rise to genetic expression.

In other words, given that even the smallest alterations to DNA can change an organism’s physical appearance, its ability to regulate or control biological functions, or its susceptibility to disease… there is indeed much to be gained from understanding the underlying mechanisms.

Consider Google DeepMind’s plans to launch AlphaGenome, a new AI tool that examines how human DNA sequences vary, and how that variation can be used for predictive purposes…

This is what Google DeepMind has to say (excerpt):  

“Our AlphaGenome model takes a long DNA sequence as input – up to 1 million letters, also known as base pairs – and predicts thousands of molecular properties that characterize its regulatory activity. It can also assess the effects of genetic variants or mutations by comparing predictions of mutated sequences with those of non-mutated sequences…”

Further, according to Google DeepMind’s website, the research project’s goals are to (1) understand disease, (2) understand how to apply synthetic biology, and (3) gain deeper insight into how DNA works.
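
To make the quoted variant-comparison idea concrete, here is a minimal sketch of how such scoring works in principle: run a sequence model on the reference DNA and on the mutated DNA, then diff the predicted regulatory properties. The model interface below is a hypothetical stand-in, not the actual AlphaGenome API.

```python
class StubModel:
    """Hypothetical stand-in for a genomic sequence model."""

    def predict(self, seq: str) -> dict[str, float]:
        # A real model would predict thousands of molecular properties;
        # this stub returns one toy "expression" score based on GC
        # content, purely for illustration.
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        return {"expression": gc}


def score_variant(model, ref_seq: str, position: int, alt_base: str) -> dict[str, float]:
    """Score a single-base variant as the change in predicted properties."""
    alt_seq = ref_seq[:position] + alt_base + ref_seq[position + 1:]
    ref_pred = model.predict(ref_seq)
    alt_pred = model.predict(alt_seq)
    # The variant's predicted effect is the per-property difference between
    # the mutated and non-mutated sequence, as the excerpt describes.
    return {prop: alt_pred[prop] - ref_pred[prop] for prop in ref_pred}


# Example: a C -> T substitution at position 3 of a toy 12-base sequence.
print(score_variant(StubModel(), "ATGCGTACGTTA", 3, "T"))
```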

In light of this new technology, once DNA’s building blocks are understood, consider how it could be used for “enhancement,” that is, for human “designer” DNA.

Consider the controversy surrounding this, as shown, for instance, in the 1997 film “GATTACA,” an absorbing futuristic science fiction movie set in a dystopia where selective breeding through designer DNA was common practice. In other words, the human race was driven by eugenics and transhumanism.

In the movie, the controversy centered on discrimination against people with “bad genes” by those deemed to have “good genes.” Who decides which genes are “good” or “bad”?

As “designer” DNA progresses, we’re getting closer to a world where genetic enhancement, for example selectively bred babies, could become the norm.

This raises a number of serious concerns about the technocratic overlords who would oversee this future in the name of next-phase “evolution,” viewing us humans as nothing more than mechanistic, bio-hackable, soulless automatons.

Keep reading

Supreme Court Hears Landmark Case On Geofence Warrants, Testing Digital Privacy Limits

The U.S. Supreme Court on Monday heard oral arguments in Chatrie v. United States, a high-stakes case that could reshape Fourth Amendment protections in the digital age and determine the future of controversial “geofence” search warrants used by law enforcement.

Geofence warrants allow police and federal agents to compel companies like Google to disclose location data for all users present in a designated geographic area during a specific time window. Investigators use the tool to identify potential suspects by sifting through vast troves of smartphone location information, effectively searching first and developing probable cause later.
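
Mechanically, the request reduces to a simple filter over a provider’s location logs: return every account with a record inside a given area during a given window. A minimal sketch, assuming a simple (account_id, timestamp, lat, lon) record schema; the providers’ actual internal systems are not public:

```python
import math
from datetime import datetime

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def within_geofence(records, center_lat, center_lon, radius_m,
                    start: datetime, end: datetime) -> set:
    """Return account IDs with at least one record inside the fence and window."""
    hits = set()
    for account_id, ts, lat, lon in records:
        if not (start <= ts <= end):
            continue  # outside the warrant's time window
        # Equirectangular approximation: accurate enough at the
        # ~100-meter scale typical of a geofence.
        dx = math.radians(lon - center_lon) * math.cos(math.radians(center_lat))
        dy = math.radians(lat - center_lat)
        if EARTH_RADIUS_M * math.hypot(dx, dy) <= radius_m:
            hits.add(account_id)
    return hits
```

The constitutional objection follows from the shape of the query itself: it starts from a place and time rather than a suspect, so everyone who passes through the fence is returned.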

Civil liberties groups argue the practice is inherently overbroad and violates constitutional safeguards against unreasonable searches. Critics point to instances where innocent bystanders, protest attendees, and unrelated individuals have had their data swept up, sometimes due to warrants that extended far beyond the crime scene, TechCrunch reported.

The case stems from the 2019 armed robbery of a bank in Virginia. Surveillance footage showed a suspect using a cellphone. Police obtained a geofence warrant and served it on Google, requesting anonymized location data for devices within a small radius of the bank around the time of the crime. Google initially provided data for multiple accounts. Investigators then sought identifying information for a subset of users, including Okello Chatrie, who was later linked to the scene, arrested, and sentenced to more than 11 years in prison after pleading guilty.

Chatrie’s legal team challenged the warrant, contending it lacked sufficient probable cause tying him—or any specific account—to the robbery. Lower courts split on the issue, with one ruling the warrant failed to meet constitutional standards but ultimately allowing the evidence under the “good faith” exception. Chatrie’s appeal argues the warrant unconstitutionally permitted a broad search of hundreds of millions of Google users’ data.

Keep reading

Stanford Biosecurity Expert Says AI Chatbot Gave Him a Blueprint for Genocidal Bioweapon, Including Step-by-Step Instructions for Modifying Deadly Pathogen and Conducting Mass Transit Attack

Stanford University microbiologist and longtime U.S. government biosecurity advisor Dr. David Relman has revealed that an AI chatbot handed him a detailed, unprompted plan for engineering and deploying a genocidal bioweapon capable of mass casualties.

The incident, which occurred during a formal safety test last summer, highlights how leading AI models are lowering the barrier to bioterrorism, shifting it from expert-level knowledge to simple prompting.

Dr. Relman, a prominent microbiologist who has advised the federal government on biological weapons threats, was specifically hired by an unnamed AI company, under a confidentiality agreement, to “pressure-test” or red-team its chatbot before public release.

These tests are designed to probe for catastrophic risks, including biosecurity threats.

While working alone in his home office one evening, Relman engaged the model in a conversation about safety limits. The AI went far beyond any direct query. It explained in detail how to modify an “infamous pathogen” in a laboratory setting to make it resistant to all known treatments, how to exploit a specific security lapse in a large public transit system for optimal release, and included a full deployment strategy designed to maximize casualties while minimizing the chances of the perpetrator being caught, according to a report from the New York Times.

The bot even offered additional steps Relman had not asked for.

“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the New York Times.

The scientist was so shaken by the exchange that he took a walk outside to clear his head.

Relman reported the dangerous output to the company, which made some adjustments to the model. However, he stated that the fixes were insufficient to guarantee public safety, raising alarms about whether current safeguards can ever fully contain these risks.

Relman’s experience is not isolated. The New York Times obtained more than a dozen similar transcripts from biosecurity experts who were testing publicly available and pre-release AI models.

Keep reading

Russia Claims Frontline Progress in War With Ukraine, as Drone Strike Kills Two in Kherson

Two people were killed after a Russian drone attacked a minibus in the southern Ukrainian city of Kherson, local officials said Saturday, in the latest attack on civilian areas, a hallmark of Moscow’s full-scale invasion of its neighbor.

Seven people were also wounded in the attack, regional head Oleksandr Prokudin said. Hours later Russia attacked another minibus in Kherson, wounding the driver, he said.

Meanwhile, along the northern border with Belarus, Ukraine recorded “rather unusual” activity on Friday, Ukrainian President Volodymyr Zelenskyy said in a post on Telegram on Saturday. Without elaborating, he said activity was seen on the Belarusian side of the border and that Ukraine would act if matters escalated.

“We are closely documenting and keeping the situation under control. If necessary, we will react,” he said.

Belarus, a close ally of the Kremlin, has allowed Russia to use its territory as a staging ground to send troops into Ukraine and to host some of Moscow’s tactical nuclear weapons.

On Ukraine’s Black Sea coast, a Russian strike damaged port infrastructure in the city of Odesa. No casualties were reported.

Keep reading

Meta Terminates Contract with Kenya After Workers Shared Intimate Videos Recorded by Smart Glasses

Mark Zuckerberg’s Meta has ended its contract with Sama, a Kenya-based data annotation company, two months after workers reported viewing sensitive footage ranging from sexual activity to bathroom breaks recorded by Ray-Ban Meta smart glasses.

Ars Technica reports that the contract termination, which affected 1,108 workers according to Sama, occurred less than two months after the allegations became public.

In February, multiple workers from Sama reported viewing sensitive, embarrassing, and apparently private footage while performing data annotation work for Meta. The complaints were featured in a report by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, along with Kenya-based freelance journalist Naipanoi Lepapa. Workers described watching explicit footage shot from Ray-Ban Meta glasses, including people changing clothes, doing drugs, having sex, and using the toilet.

Sama had been contracted by Meta to perform video, image, and speech annotation for the AI systems behind Ray-Ban Meta smart glasses. The company’s workers were tasked with reviewing content to help improve the performance of Meta’s AI products.

A Meta spokesperson told Breitbart News, “Last month, we paused our work with Sama while we looked into these claims. We take them seriously. Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent. We’ve also decided to end our work with Sama because they don’t meet our standards.”

Sama workers believe the contract was terminated in retaliation for speaking out about the disturbing content they encountered during their work. One anonymous Sama employee was quoted in the February report saying workers “are just expected to carry out the work” even when viewing private footage.

Keep reading

Convicted Harvard Scientist Could Help China Make Super Soldiers

We’re living in the dumbest timeline. This week we learned that Charles Lieber, the former chair of Harvard’s Chemistry Department, has defected to China. Now he could be helping the PLA make super soldiers. Cool cool cool.

Charles Lieber is a brilliant scientist. Fifteen years ago, Reuters named him the world’s top chemist.

But at some point he started working with China’s “Thousand Talents” program, the idea of which is to lure the big brains of the West into collaborative research that ultimately serves the CCP’s interests.

In 2021, Charles Lieber was convicted in the US for lying about his ties to China and hiding payments linked to the Thousand Talents program. His punishment was…just two days in prison, six months of house arrest, and a $50,000 fine.

Now, Lieber has resurfaced in Shenzhen, where he leads a state-backed high tech science lab.

This lab comes with cutting-edge nanofabrication tools, generous Chinese government funding, and even access to primate testing facilities. Harvard closed its primate testing facility a decade ago, due to ethical concerns. That’s why China is such an attractive place for scientists to work. No rules, baby! (Except to obey the CCP, obviously.)

The technology he’s working on in Shenzhen is brain-computer interfaces. My optimistic spin: These devices could help patients with ALS. My dystopian spin: The CCP will use this to create cognitively enhanced super soldiers.

This is the problem with collaborating with China on any research. All research becomes part of the CCP’s “military-civil fusion” strategy, where civilian breakthroughs can be used by the military. Your intentions don’t matter. The CCP is using you.

An example of this would be the NIH-funded gain-of-function research on coronaviruses in Wuhan. American scientists may have at first believed their research would help prevent the next global pandemic. Ha. Ha. Ha.

As for Charles Lieber, he may think that by defecting to China he can have a better life, and maybe he can even help make the world a better place. (We’re all the heroes of our own story.) But eventually, the CCP will use him, and toss him out with the trash. And it’ll be his own fault.

Keep reading

Meta Buys Robot Brain Startup As Zuck Wants Humanoids In Homes

After the Oculus and Metaverse bets turned into costly disappointments for Mark Zuckerberg’s Meta Platforms, the tech giant’s pivot to real-world humanoid robotics appears to be gaining momentum, with news Friday afternoon that it is acquiring Assured Robot Intelligence.

Bloomberg reports that Meta has closed the acquisition of the humanoid robotics startup, which develops AI models to help robots understand, predict, and adapt to human behavior in complex environments.

What Meta has acquired appears to be a “robot brain” designed to give Zuckerberg’s humanoid robots better control, self-learning capabilities, and whole-body movement, enabling them to operate around people and perform physical tasks. Eventually, Zuckerberg wants these bots in your home.

Under the deal, co-founders Lerrel Pinto and Xiaolong Wang will join Meta Superintelligence Labs and work with the Meta Robotics Studio.

There is no information about the robot brains on ARI’s website. Using data from the commercial risk intelligence firm Sayari, however, we can see the startup’s founders and directors.

Keep reading