Avoid Hertz’s AI-Powered Rental Car Scanners

Back in late 2023, Amazon unveiled an AI-powered scanner that drivers pass through at the end of each shift. The system detects even minor anomalies on delivery vans, such as tire deformities, undercarriage wear, and small scratches or dents. A nearly identical AI scanner has now appeared in the parking lots of rental car giant Hertz.

Auto blog The Drive reports that a growing number of Hertz locations across the country are adopting AI scanners to inspect returned rental cars. The outlet warns that consumers may want to steer clear of these scanners, as customers risk being hit with steep fees for even the smallest blemishes.

“A customer rents a vehicle from Hertz, and upon return, the scanner finds an insignificant blemish on the vehicle, leading to unexpectedly high charges that are often doubled through tacked-on administrative fees,” The Drive explained. 

There have been several reports—most notably at Hertz’s Atlanta airport location—of frustrated customers being charged for “insignificant blemishes” after passing through AI scanners. These systems are still relatively new to the rental industry. The Drive has warned readers to take precautions when renting vehicles and offered several tips on protecting themselves from unexpected charges…

Keep reading

MIT Study Finds ChatGPT Can Harm Critical Thinking Over Time

A recent study by the Media Lab at MIT found that prolonged use of ChatGPT, a large language model (LLM) chatbot, can have a harmful impact on the cognitive abilities of its users.

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels, noted the report, whose main author was research scientist Nataliya Kos’myna.

These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning, it added.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, ‘let’s do GPT kindergarten,’” Kos’myna told Time magazine. “I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.”

For the research, 54 subjects, aged 18 to 39, were divided into three groups to write several SAT essays. One group could use ChatGPT; the second, Google search; and the third, no tools at all. An EEG measured each participant’s activity across 32 brain regions. Of the three groups, the ChatGPT users showed the lowest brain engagement.

Keep reading

Ukraine Drone ‘Mega Deal’ Possible With US, Zelensky Says

President Trump and Ukrainian President Volodymyr Zelensky are negotiating what is being widely dubbed a potential “mega deal,” under which Ukraine would exchange its growing small-drone arsenal and technology for more advanced American weapons.

Under the scheme, the United States would purchase Ukrainian-made drones while Ukraine would in turn buy American weapons, Zelensky said in an interview with the New York Post. “The American people need this technology—it should be part of your defense arsenal,” Zelensky told the Post.

Ukraine was never much of a drone producer, but the war with Russia has transformed the country into a small-drone manufacturing powerhouse that now turns out millions of small, cheap drones. The UAV war has grown to the point that, in recent months, Russia and Ukraine have exchanged hundreds of drones on a nightly basis.

US defense planners were reportedly impressed by Ukrainian UAVs’ reach and effectiveness, particularly during ‘Operation Spiderweb,’ in which nearly 120 Ukrainian drones disabled or destroyed multiple Russian bombers across four airbases.

The Pentagon recently emphasized the urgent need to scale up drone production, with Defense Secretary Pete Hegseth also calling for more drone training across all branches of the US military. The defense budget for next year will also seek to ramp up America’s small drone warfare readiness.

As for the ‘mega deal’ being teased by Zelensky, it is far from certain how fully the White House will actually sign on. Trump’s emphasis of late has been for NATO’s European members to give up their US-made weapons and transfer them to Ukraine first.

It’s also not clear that Ukraine has much in the way of weapons or technology to offer Washington: many US military tech companies and defense contractors are already ramping up small-drone production, and the tech offered by American firms like Anduril Industries, with its AI integration, is arguably without parallel.

Keep reading

Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, and applications in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.

Keep reading

Artificial Intelligence Breeds Mindless Inhumanity

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Keep reading

House Committee Leaders Urge FBI To Halt Certifying Chinese Biometric Devices

The bipartisan leaders of a House committee are urging the FBI to halt the certification of biometric products manufactured by Chinese tech companies, citing risks to U.S. national security.

In a letter dated July 15 to FBI Director Kash Patel, Reps. John Moolenaar (R-Mich.) and Raja Krishnamoorthi (D-Ill.), chair and ranking member, respectively, of the House Select Committee on the Chinese Communist Party, said that biometric products from 32 Chinese companies are currently on the agency’s Certified Products List.

The FBI should “put an end to its ongoing certification of products from Chinese military-linked and surveillance companies … that could be used to spy on Americans, strengthen the repressive surveillance state of the People’s Republic of China (PRC), and otherwise threaten U.S. national security,” the lawmakers wrote.

Among the 32 companies, the lawmakers highlighted Hangzhou Hikvision Digital Technology, which was added to the Commerce Department’s Entity List in 2019 over its involvement in the Chinese Communist Party’s (CCP’s) human rights violations in China’s far-western Xinjiang region. In 2021, Hikvision was designated as a company linked to China’s military-industrial complex in an executive order.

Currently on the FBI list is Hikvision’s HK300 PIV “single finger capture device,” which was certified on Jan. 15.

“Including these products on the Certified Products List grants these companies the FBI’s seal of approval, which they can leverage to market their products as FBI-approved to customers in the U.S. government, elsewhere in the United States, and around the globe,” the letter reads.

“This sends a dangerous signal to potential buyers that these companies’ products are trustworthy and heightens the risk that these products will be procured by U.S. government entities or contractors despite the security risks.

“It also sends conflicting messages about U.S. policy toward companies with ties to the PRC’s military-industrial complex.”

Keep reading

Eight Healthy Babies Born via IVF Using DNA from Three People

In the United Kingdom, medical professionals have successfully delivered eight babies using a pioneering fertility procedure that incorporates DNA from three individuals.

This method aims to safeguard children from inheriting severe mitochondrial disorders. The births represent a cautious advancement in assisted reproduction, prioritizing family health and stability.

The mothers involved carried mutations in their mitochondria, risking life-threatening conditions for their offspring. Mitochondria serve as cellular energy sources, essential for bodily functions. Without intervention, these defects could devastate future generations.

The United Kingdom amended its laws in 2015 to permit this technique, reflecting deliberate ethical review. In 2017, regulators issued the initial license to Newcastle University’s fertility clinic. This institution led the development over two decades.

Among the newborns are four boys and four girls, including identical twins, born to seven women. All show no evidence of the anticipated mitochondrial ailments. One additional pregnancy continues under medical care.

Professor Doug Turnbull, a key researcher, described the results as reassuring for families and scientists alike. He highlighted the relief in achieving positive outcomes for patients.

Professor Mary Herbert, a senior team member, expressed fulfillment in seeing eight healthy infants. She noted the achievement rewards the extensive collaborative work.

Human genes primarily reside in the cell’s nucleus, totaling around 20,000. However, mitochondria carry 37 genes of their own. Harmful mutations here can lead to profound cellular energy deficits.

Keep reading

Study finds AI tools made open source software developers 19 percent slower

When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to increase their productivity and overall efficiency in creating computer code. However, a new randomized controlled trial has found that experienced open source coders became less efficient at coding-related tasks when they used current AI tools.

For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual “tasks” involved in maintaining those repos, such as “bug fixes, features, and refactors that would normally be part of their regular work.” For half of those tasks, the developers used AI tools like Cursor Pro or Anthropic’s Claude; for the others, the programmers were instructed not to use AI assistance. Expected time forecasts for each task (made before the groupings were assigned) were used as a proxy to balance the overall difficulty of the tasks in each experimental group, and the time needed to fix pull requests based on reviewer feedback was included in the overall assessment.
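The headline figure comes from comparing how long tasks actually took against those pre-assignment forecasts in each condition. Here is a minimal sketch of that kind of comparison in Python, with invented numbers chosen to echo the 19 percent result; this is an illustration of the idea, not METR’s actual data or analysis:

```python
import statistics

# Hypothetical task records: (forecast_hours, actual_hours, ai_allowed).
# The numbers are invented for illustration; they are not METR's data.
tasks = [
    (2.0, 2.3, True), (1.5, 1.5, False), (3.0, 3.5, True),
    (2.5, 2.4, False), (1.0, 1.2, True), (2.0, 2.0, False),
]

def mean_time_ratio(records, ai_allowed):
    """Average actual/forecast ratio for one condition. Dividing by the
    forecast (made before conditions were assigned) normalizes for task
    difficulty across the two groups."""
    return statistics.mean(
        actual / forecast
        for forecast, actual, ai in records
        if ai == ai_allowed
    )

slowdown = mean_time_ratio(tasks, True) / mean_time_ratio(tasks, False) - 1
print(f"Relative slowdown with AI: {slowdown:.0%}")  # -> 19%
```

METR’s full analysis is more involved, but the forecast-normalized comparison between the AI and no-AI conditions is the core idea behind the slowdown figure.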

Keep reading

Chinese Military Unveils Robot Murder Wolves, Drones That Definitely Violate Asimov’s Laws

The Chinese military conducted tactical exercises integrating robotic wolves and infantry-operated drones, underscoring Beijing’s push to deploy unmanned systems in modern combat operations, according to state-run media.

The 76th Group Army’s drills focused on battlefield coordination between personnel and autonomous technologies for reconnaissance, strategic point clearing, fire support and breaching defensive positions, according to a military statement. The exercises represent China’s latest effort to advance unmanned warfare capabilities amid growing global competition in military robotics.

The robotic wolves, branded “steel warriors,” debuted at the 2024 Airshow China exhibition before being deployed in joint exercises with Cambodia. During the 2024 “Golden Dragon” exercise, China fielded a rifle-armed robotic wolf for assault operations. The follow-up “Golden Dragon 2025” exercise featured a UAV equipped with a QBZ-95 assault rifle providing fire cover for ground units.

The military demonstrations come as Chinese defense analysts raise concerns about the ethical implications of autonomous weapons systems. In a recent People’s Liberation Army Daily op-ed, analysts Yuan Yi, Ma Ye and Yue Shiguang called for “ethical and legal research” to address risks from militarized robots, warning that malfunctioning units could cause “indiscriminate killings and accidental deaths.”

The PLA Daily authors referenced Isaac Asimov’s Three Laws of Robotics, which prohibit robots from harming humans, arguing that militarized humanoid robots “clearly violate” these principles. They proposed overhauling Asimov’s framework for military applications, emphasizing that combat robots must adhere to laws of war by “obeying humans,” “respecting humans” and “protecting humans.”

Keep reading

How to Stop Google’s Gemini AI from Accessing Your Apps

Google has initiated a broad Android update that enables its Gemini AI assistant to access third-party apps such as WhatsApp, even when users had previously set their devices to block such interactions.

This shift began rolling out this week and appears to override individual privacy preferences, leaving many Android users without a clear path to opt out.

According to Ars Technica, in an email sent ahead of the rollout, Google linked to a support page that confirms “human reviewers (including service providers) read, annotate, and process” data accessed by Gemini.

The email fails to provide direct steps for those hoping to prevent the AI from integrating with their apps.

While it claims users can block specific app interactions, it also states that data will still be stored temporarily for 72 hours.

The message offers conflicting statements about user control. It says Gemini will begin integrating with services like WhatsApp, Phone, and Messages “whether your Gemini apps activity is on or off.”

Yet shortly after, it says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages does Google explain how users can fully remove Gemini from their devices or prevent it from being enabled in the background.

Keep in mind that WhatsApp is an end-to-end encrypted messaging app, and Gemini’s access to it undermines that encryption.
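Google documents no sanctioned removal path, but one workaround Android power users often fall back on is removing the Gemini app from the current user profile over adb. Below is a minimal sketch, assuming adb is installed, USB debugging is enabled, and that com.google.android.apps.bard is the Gemini package name on your device; all three are assumptions worth verifying before running anything:

```python
import subprocess

# Assumed package id for the Gemini app; confirm on your device with
# `adb shell pm list packages | grep bard` before running this.
GEMINI_PACKAGE = "com.google.android.apps.bard"

# Remove Gemini for the current user profile (user 0). The app remains
# on the system partition, so a factory reset (or `pm install-existing`)
# restores it; this is a per-user removal, not a true uninstall.
subprocess.run(
    ["adb", "shell", "pm", "uninstall", "--user", "0", GEMINI_PACKAGE],
    check=True,
)
```

Note that this removes the assistant from the device profile; it does nothing about data Google has already collected or stores server-side.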

Keep reading