Is ‘vibe physics’ the future?

At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew a crowd of early-career physicists searching for practical career advice. What they received was much more philosophical in nature.

Malachi Schram of the Pacific Northwest National Lab and Hilary Egan of the National Laboratory of the Rockies delivered back-to-back talks full of similar rhetoric, emphasizing the fast-paced development of AI used for specialized tasks in science, such as detecting equipment failure or identifying ways of retrofitting older buildings.

But the third speaker, Matthew Schwartz, a theoretical physicist from Harvard University, took his optimism about AI much further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence within five years.

“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.

Unlike humans, machines can visualize higher-dimensional spaces, hold far more information in memory and process more complex equations. “We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far,” Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything. He compared it to cats, which he suggested will never understand chess.

If the talent of physicists exists on a bell curve, Schwartz claimed, AI can shift that curve up the talent axis: “If we use AI augmentation, we can get 10 000 Einsteins a century instead of one Einstein.”

Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US

A cyberattack on a vehicle breathalyzer company has left drivers across the United States stranded and unable to start their vehicles.

The company, Intoxalock, says on its website that it is “currently experiencing downtime” after a cyberattack on March 14. Intoxalock sells breathalyzer devices that fit into vehicle ignition switches and are used by people who are required to provide a negative alcohol breath sample to start their car.

Intoxalock spokesperson Rachael Larson confirmed to TechCrunch that the company had been hit by a cyberattack. Larson said the company took steps to “temporarily pause some of our systems as a precautionary measure.”

These breathalyzer devices need to be calibrated every few months, but the cyberattack has left Intoxalock unable to perform these calibrations. The company said customers whose devices require calibration may experience delays starting their vehicles.

Drivers posting on Reddit say that cars are unable to start if they miss a calibration, effectively locking drivers out of their vehicles.

Local news reports from Maine describe drivers experiencing lockouts, with some unable to start their vehicles. One auto shop in Middleboro, Massachusetts, told WCVB 5 in Boston that it has had cars parked in its lot all week due to the cyberattack.

News reports from across the United States show drivers affected from New York to Minnesota, unable to drive because their vehicle-based breathalyzers cannot be immediately calibrated.

Intoxalock would not say what kind of cyberattack it was experiencing, such as ransomware, whether there was a data breach, or whether it had received any communications from the hackers, including ransom demands. The company’s technology is used in 46 states, its website says, and it claims to provide services to 150,000 drivers every year.

Intoxalock did not provide an estimated timeline for its recovery.

US Charges Supermicro Cofounder, 2 Workers With Sending Nvidia Chips to China

U.S. authorities on March 19 charged the cofounder of Super Micro Computer, also known as Supermicro, and two workers with diverting to China servers containing Nvidia-made chips, which are subject to U.S. export controls.

The Department of Justice (DOJ) said in a statement that it had arrested Yih-Shyan “Wally” Liaw, a U.S. citizen who co-founded Supermicro and served as senior vice president of business development at the publicly traded company, along with Taiwanese citizen Ting-Wei “Willy” Sun, who worked as a contractor for the company. Liaw is also a member of the company’s board of directors, according to a company statement.

The DOJ said that a third defendant, identified as Ruei-Tsang “Steven” Chang, a Taiwanese citizen who worked as a general manager for Supermicro in Taiwan, is still at large.

According to the indictment, between 2024 and 2025, the defendants allegedly diverted at least $2.5 billion worth of servers equipped with Nvidia-made graphics processing units (GPUs) to China in violation of U.S. export control laws.

Liaw and Chang allegedly directed executives of a Southeast Asian company, which the DOJ did not name in the indictment, to place purchase orders with Supermicro for servers with certain GPUs, purportedly for that company.

The servers were assembled in the United States, shipped to Supermicro facilities in Taiwan, and subsequently delivered to the company at another location in Southeast Asia, according to the indictment.

NASA Helped Ferrari Fix The Luce EV’s “Disturbing” Acceleration

The high-end EV market is facing some struggles, but despite this, Ferrari is plowing ahead with its first-ever electric car, the aptly named Luce. While the brand is perhaps the last you’d ever expect to enter the EV world, it’s confident the model will offer all the driving thrills expected of a Prancing Horse.

During a recent interview, Ferrari chief executive Benedetto Vigna insisted that the Luce will deliver each of the five key ingredients of driving thrills, ensuring it is befitting of the brand’s badge and can succeed where some EVs have failed: tugging at the emotional heartstrings.

Speaking with Autocar India, Vigna said one element “is longitudinal acceleration,” agreeing with the interviewer that perhaps this acceleration in EVs is too linear, and also “too much, because sometimes it’s disturbing our brain.” He went on to reveal that Ferrari has worked with NASA to “understand what is the level of acceleration that is disturbing people,” and that too much acceleration is not a good thing.

Another important contributor to driving thrills is “transversal acceleration, followed by the braking experience, the gearshift, and the sound.” As recent images of the Luce’s interior revealed, it will include paddle shifters, and unlike some EVs, these won’t be used to adjust the level of brake regeneration but instead to adjust the level of torque engagement.

Vigna stopped short of confirming that the system will mimic traditional shifts, as in the Hyundai Ioniq 5 N, but it certainly sounds like that is what Ferrari is aiming for. Then there’s the all-important sound.

‘Multiple waves’ of unauthorized drones recently spotted over strategic US Air Force base

A drone sighting that temporarily raised alarms at one of the United States Air Force’s largest and most strategic airfields earlier this month was more extensive, and potentially more dangerous, than first reported, according to a confidential internal briefing document reviewed by ABC News.

Barksdale Air Force Base in Louisiana said it was placed under a shelter-in-place order on March 9 after “a report of an unmanned aerial system operating over the installation.”

The sighting raised concerns because Barksdale houses long-range B-52 bombers and plays a critical role in command and control of the Air Force nuclear defense capabilities.

The shelter-in-place order was lifted later that day, but the unauthorized drone flights continued for nearly a week.

“Barksdale Air Force Base detected multiple unauthorized drones operating in our airspace during the week of March 9th,” Capt. Hunter Rininger of the 2nd Bomb Wing said in a statement provided to ABC News. The additional drone incursions had not been previously reported.

According to the confidential briefing document dated March 15, the drones came in waves and entered and exited the base in a way that may suggest attempts to “avoid the operator(s) being located.” Lights on the drones suggested the operators “may be testing security responses” at the base.

Air Force Special Operations Wants Backpack-Sized Kamikaze Drones

The U.S. Air Force is seeking small, backpack-portable one-way attack drones for its special operations forces, according to a request for information (RFI) posted this week.

“Air Force Special Operations Command (AFSOC) and Special Tactics units currently lack a purpose-built First-Person View (FPV) unmanned capability,” the RFI notes. “This deficit restricts the force’s ability to employ FPV systems in specialized mission sets and limits the development of standardized Tactics, Techniques, and Procedures essential for modern, high-intensity conflict.”

According to the RFI, AFSOC wants the drones to be capable of striking targets up to 12 miles away with a fragmentation warhead weighing 3 to 6.5 pounds. The system must be launch-ready in under three minutes and able to operate in GPS-denied environments.

“This system needs to integrate Global Positioning System (GPS), 4G/LTE/5G cellular connectivity, true frequency hopping between bands, and an optional repeater to extend operational range to over 20 kilometers,” the RFI said.

The systems are expected to integrate with handheld controllers and the Android Team Awareness Kit, or ATAK, used by small military units for battlefield awareness and targeting.

Companies have until April 17 to respond to the RFI. 

The Pentagon plans to spend $1.1 billion over the next 18 months on its Drone Dominance program, an initiative launched in December aimed at testing and purchasing more than 200,000 drones of various sizes by January 2028, Owen West, the Pentagon’s senior adviser on the program, said during a March 5 congressional hearing.

The program is intended in part to build a domestic industry around small drones to enable higher production volumes at lower costs.

In its initial phase, the Pentagon is paying about $5,000 for each “Group 1” drone, Drone Dominance program manager Travis Metz said during the hearing. He added that by the end of the program the goal is to “get down to less than $2,000 for a one-way kamikaze attack drone.”

ChatGPT Helped Transgender Teen Plan School Shooting: 8 Dead

An 18-year-old transgender teenager in Tumbler Ridge, British Columbia, is alleged to have used the AI chatbot ChatGPT in the run-up to a February 10 school shooting that killed eight people, including her mother, her 11-year-old brother, five students and an education assistant, before she took her own life. OpenAI had already flagged and banned one of Jesse Van Rootselaar’s accounts months earlier for “misuses of our models in furtherance of violent activities,” yet did not alert police. According to a civil claim filed in British Columbia, roughly a dozen employees identified the chats as signalling imminent risk, but leadership refused to contact law enforcement, and the shooter later opened a second account and continued planning.

What Happened in Tumbler Ridge?

The massacre began at home. Police said Van Rootselaar killed her mother and sibling before going to a school in Tumbler Ridge, where an educator and five students were shot dead. Two others were hospitalised with serious injuries. Reuters described it as one of Canada’s worst mass killings. Police also said they had previously removed guns from the home and were aware of the teenager’s mental health history. 

That would already be a story of institutional failure. But the AI angle makes it worse. OpenAI later admitted it had banned Van Rootselaar’s ChatGPT account in June 2025 after detecting violent misuse. The company said it considered referring the case to law enforcement, but decided the activity did not meet its threshold because it could not identify “credible or imminent planning.” Months later, eight people were dead. 

OpenAI then told Canadian officials that, under its newer and “enhanced” law-enforcement referral protocol, the same initial account ban would now be referred to police. That is an extraordinary concession. It amounts to an admission that the safeguard in place at the time was inadequate to the risk in front of it. 

The Lawsuit Against OpenAI / ChatGPT

The most serious details now sit inside a civil claim brought by the family of a surviving victim. The filing alleges that Van Rootselaar, then 17, spent days describing gun-violence scenarios to ChatGPT in late spring or early summer 2025. It says the platform’s monitoring system flagged those conversations, routed them to human moderators, and that approximately 12 OpenAI employees identified them as indicating an imminent risk of serious harm and recommended that Canadian law enforcement be informed. The claim alleges leadership refused that request and merely banned the first account. 

The same filing alleges the shooter later opened a second OpenAI account, used it to continue planning a mass-casualty event, and received “mental health counselling and pseudo-therapy” from ChatGPT. It further alleges the chatbot equipped the shooter with information on methods, weapons, and precedents from other mass casualty events. These are allegations, not proven findings, but if they are even broadly accurate, the case is not simply about a product being misused. It is about a company building an intimate, persuasive machine that could flag danger, simulate empathy, and still fail to stop the person it had already flagged. 

The filing also accuses OpenAI of deliberately designing GPT-4o in a warmer, more human, more sycophantic style that could foster psychological dependency and reinforce users’ beliefs rather than redirect them. These claims fit a wider concern now being raised by researchers, families, and even some people inside the industry: a chatbot that is rewarded for being agreeable can become dangerous precisely when a human being most needs resistance.

U.S. Tech Firms Demand Security Restrictions Against Chinese Robots

American AI and robotics companies are asking Congress to impose curbs on Chinese robotics manufacturers, citing unfair business practices and security risks, Chinese media complained this week.

Interestingly, these concerns are particularly acute for humanoid robots, not the bulky industrial machines traditionally associated with the robotics industry.

Humanoid robots, the stuff of countless science fiction stories, are finally happening, and witnesses told the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee on Tuesday that China has developed a troubling lead in the new consumer technology.

Max Fenkell of the San Francisco-based company Scale AI highlighted a viral video from China’s Unitree Robotics that showed humanoid robots performing acrobatics and martial arts at a Lunar New Year celebration.

“The video went viral, not because it was impressive, but because of what happened when people compared it to last year, 12 months ago – the same robots could barely shuffle through a dance routine. This year, they’re doing karate. That is the speed of this competition,” Fenkell noted.

Fenkell said winning the humanoid robot race “requires a whole-of-government approach” to compete with China’s massive deployment of government funding and state power in support of its robotics industry. He noted that American companies currently have the edge in component quality and engineering, but China has taken the lead in implementing small-robot technology in practical ways.

“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he cautioned.

“The People’s Republic of China is moving aggressively to dominate the technologies that are reshaping the global economy and security, including artificial intelligence, robotics, and autonomous systems,” said subcommittee member Rep. Vince Fong (R-CA) in his opening statement.

“An Occupied Nation”: Whistleblower Says Palantir Has Taken Over The US Government

A former Palantir executive recently confirmed what many have long suspected. In a public statement, the whistleblower said it plainly: Palantir intended to take over the US government, and many of his former colleagues are now installed inside the federal apparatus. He called it an occupied nation. He is not alone. Thirteen former Palantir employees—engineers, managers, and a member of the company’s own privacy team—signed a letter shared with NPR warning that guardrails meant to prevent discrimination, disinformation, and abuse of power have been violated and are being rapidly dismantled.

What Palantir represents is something unprecedented: the convergence of American imperialism, Zionism, technofascism, and surveillance capitalism into a single instrument of control. Understanding how we got here requires looking at the machine Palantir has built, who built it, and what they believe.

Palantir was founded in 2004 by Peter Thiel and Alex Karp. Its first major investor was In-Q-Tel, the CIA’s venture capital arm, which seeded the company with millions and opened the door to every major intelligence and defense agency. The logic was deliberate: The American ruling class recognized decades ago that the state’s coercive power—surveillance, targeting, data harvesting—could be run more effectively and more profitably through private contractors. When a government agency surveils its own citizens, there are hearings, FOIA requests, oversight committees. When a private company does it, it is a trade secret.

That strategy has paid off enormously. Palantir now holds contracts worth over $10 billion with the US Army alone. The Trump regime tapped Palantir to build a master database on American citizens. The Pentagon expanded its Maven Smart System contract by $795 million to deploy AI-powered battlefield intelligence across the empire. In June, the military swore in four tech executives as Army Reserve lieutenant colonels—including Palantir’s CTO—in a program that embeds Silicon Valley directly into military planning. Immigration and Customs Enforcement (ICE) signed a $30 million contract for Palantir’s ImmigrationOS platform, which provides near real-time tracking of people targeted for deportation. Thousands of American police departments use Palantir’s Gotham platform for domestic surveillance.

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — misclassified minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 on the platform each week, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.
