Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, and applications in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.

Keep reading

Artificial Intelligence Breeds Mindless Inhumanity

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Keep reading

House Committee Leaders Urge FBI To Halt Certifying Chinese Biometric Devices

The bipartisan leaders of a House committee are urging the FBI to halt the certification of biometric products manufactured by Chinese tech companies, citing risks to U.S. national security.

In a letter dated July 15 to FBI Director Kash Patel, Reps. John Moolenaar (R-Mich.) and Raja Krishnamoorthi (D-Ill.), chair and ranking member, respectively, of the House Select Committee on the Chinese Communist Party, said that biometric products from 32 Chinese companies are currently on the agency’s Certified Products List.

The FBI should “put an end to its ongoing certification of products from Chinese military-linked and surveillance companies … that could be used to spy on Americans, strengthen the repressive surveillance state of the People’s Republic of China (PRC), and otherwise threaten U.S. national security,” the lawmakers wrote.

Among the 32 companies, the lawmakers highlighted Hangzhou Hikvision Digital Technology, which was added to the Commerce Department’s Entity List in 2019 over its involvement in the Chinese Communist Party’s (CCP’s) human rights violations in China’s far-western Xinjiang region. In 2021, Hikvision was designated as a company linked to China’s military-industrial complex in an executive order.

Currently on the FBI list is Hikvision’s HK300 PIV “single finger capture device,” which was certified on Jan. 15.

“Including these products on the Certified Products List grants these companies the FBI’s seal of approval, which they can leverage to market their products as FBI-approved to customers in the U.S. government, elsewhere in the United States, and around the globe,” the letter reads.

“This sends a dangerous signal to potential buyers that these companies’ products are trustworthy and heightens the risk that these products will be procured by U.S. government entities or contractors despite the security risks.

“It also sends conflicting messages about U.S. policy toward companies with ties to the PRC’s military-industrial complex.”

Keep reading

Eight Healthy Babies Born via IVF using DNA from Three People

In the United Kingdom, medical professionals have successfully delivered eight babies using a pioneering fertility procedure that incorporates DNA from three individuals.

This method aims to safeguard children from inheriting severe mitochondrial disorders. The births represent a cautious advancement in assisted reproduction, prioritizing family health and stability.

The mothers involved carried mutations in their mitochondria, risking life-threatening conditions for their offspring. Mitochondria serve as cellular energy sources, essential for bodily functions. Without intervention, these defects could devastate future generations.

The United Kingdom amended its laws in 2015 to permit this technique, reflecting deliberate ethical review. In 2017, regulators issued the initial license to Newcastle University’s fertility clinic. This institution led the development over two decades.

Among the newborns are four boys and four girls, including identical twins, from seven women. All show no evidence of the anticipated mitochondrial ailments. One additional pregnancy continues under medical care.

Professor Doug Turnbull, a key researcher, described the results as reassuring for families and scientists alike. He highlighted the relief in achieving positive outcomes for patients.

Professor Mary Herbert, a senior team member, expressed fulfillment in seeing eight healthy infants. She noted the achievement rewards the extensive collaborative work.

Human genes primarily reside in the cell’s nucleus, totaling around 20,000. However, mitochondria carry 37 genes of their own. Harmful mutations here can lead to profound cellular energy deficits.

Keep reading

Study finds AI tools made open source software developers 19 percent slower

When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to increase their productivity and overall efficiency in creating computer code. However, a new randomized controlled trial has found that experienced open source coders became less efficient at coding-related tasks when they used current AI tools.

For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual “tasks” involved with maintaining those repos, such as “bug fixes, features, and refactors that would normally be part of their regular work.” For half of those tasks, the developers used AI tools like Cursor Pro or Anthropic’s Claude; for the others, the programmers were instructed not to use AI assistance. Expected time forecasts for each task (made before the groupings were assigned) were used as a proxy to balance out the overall difficulty of the tasks in each experimental group, and the time needed to fix pull requests based on reviewer feedback was included in the overall assessment.
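The headline figure comes from comparing how long matched tasks took with and without AI assistance. As a rough illustration of that kind of comparison (not METR’s actual analysis, and using hypothetical timings), one can compute a geometric-mean slowdown across paired tasks:

```python
# Illustrative sketch with made-up numbers, NOT METR's data or methodology:
# estimate a relative slowdown from paired task timings.
import math

# (hours with AI tools, hours without) for hypothetical matched tasks
tasks = [(2.5, 2.0), (1.2, 1.0), (3.6, 3.0), (0.6, 0.5)]

# Average the per-task ratios in log space (geometric mean), since task
# durations span a wide range and ratios are multiplicative.
log_ratios = [math.log(ai / no_ai) for ai, no_ai in tasks]
slowdown = math.exp(sum(log_ratios) / len(log_ratios)) - 1

print(f"Estimated slowdown with AI: {slowdown:.0%}")
```

With these invented numbers the developers take roughly a fifth longer when using AI, echoing the study’s direction of effect; the real study also controlled for task difficulty via the pre-assignment time forecasts.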

Keep reading

Chinese Military Unveils Robot Murder Wolves, Drones That Definitely Violate Asimov’s Laws

The Chinese military conducted tactical exercises integrating robotic wolves and infantry-operated drones, underscoring Beijing’s push to deploy unmanned systems in modern combat operations, according to state-run media.

The 76th Group Army’s drills focused on battlefield coordination between personnel and autonomous technologies for reconnaissance, strategic point clearing, fire support and breaching defensive positions, according to a military statement. The exercises represent China’s latest effort to advance unmanned warfare capabilities amid growing global competition in military robotics.

The robotic wolves, branded “steel warriors,” debuted at the 2024 Airshow China exhibition before being deployed in joint exercises with Cambodia. During the 2024 “Golden Dragon” exercise, China fielded a rifle-armed robotic wolf for assault operations. The follow-up “Golden Dragon 2025” exercise featured a UAV equipped with a QBZ-95 assault rifle providing fire cover for ground units.

The military demonstrations come as Chinese defense analysts raise concerns about the ethical implications of autonomous weapons systems. In a recent People’s Liberation Army Daily op-ed, analysts Yuan Yi, Ma Ye and Yue Shiguang called for “ethical and legal research” to address risks from militarized robots, warning that malfunctioning units could cause “indiscriminate killings and accidental deaths.”

The PLA Daily authors referenced Isaac Asimov’s Three Laws of Robotics, which prohibit robots from harming humans, arguing that militarized humanoid robots “clearly violate” these principles. They proposed overhauling Asimov’s framework for military applications, emphasizing that combat robots must adhere to laws of war by “obeying humans,” “respecting humans” and “protecting humans.”

Keep reading

How to Stop Google’s Gemini AI from Accessing Your Apps

Google has initiated a broad Android update that enables its Gemini AI assistant to access third-party apps such as WhatsApp, even when users had previously set their devices to block such interactions.

The change began rolling out this week and appears to override individual privacy preferences, leaving many Android users without a clear path to opt out.

According to Ars Technica, in an email sent ahead of the rollout, Google linked to a support page that confirms “human reviewers (including service providers) read, annotate, and process” data accessed by Gemini.

The email fails to provide direct steps for those hoping to prevent the AI from integrating with their apps.

While it claims users can block specific app interactions, it also states that data will still be stored temporarily for 72 hours.

The message offers conflicting statements about user control. It says Gemini will begin integrating with services like WhatsApp, Phone, and Messages “whether your Gemini apps activity is on or off.”

Yet shortly after, it says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages does Google explain how users can fully remove Gemini from their devices or prevent it from being enabled in the background.

Keep in mind that WhatsApp is an end-to-end encrypted messaging app, and Gemini’s access to it undermines that encryption.

Keep reading

OpenAI Unveils ‘Agent’ Feature That Can Control Your Computer and Perform Tasks Automatically

OpenAI is preparing to launch a new feature that may mark the latest leap forward in the artificial intelligence revolution.

On Thursday, the company unveiled ChatGPT Agent, a tool designed to carry out tasks independently using its own built-in “virtual computer.”

The Verge has more details:

In a briefing and demo with The Verge, Yash Kumar and Isa Fulford — product lead and research lead on ChatGPT Agent, respectively — said it’s powered by a new model that OpenAI developed specifically for the product.

The company said the new tool can perform tasks like looking at a user’s calendar to brief them on upcoming client meetings, planning and purchasing ingredients to make a family breakfast, and creating a slide deck based on its analysis of competing companies.

The model behind ChatGPT Agent, which has no specific name, was trained on complex tasks that require multiple tools — like a text browser, visual browser, and terminal where users can import their own data — via reinforcement learning, the same technique used for all of OpenAI’s reasoning models. OpenAI said that ChatGPT Agent combines the capabilities of both Operator and Deep Research, two of its existing AI tools.

To develop the new tool, the company combined the teams behind both Operator and Deep Research into one unified team. Kumar and Fulford told The Verge that the new team is made up of between 20 and 35 people across product and research.

Despite stiff competition from the likes of Google’s Gemini, Elon Musk’s Grok and China’s DeepSeek, OpenAI’s models remain the most popular and advanced large language models currently on the market.

Keep reading

Israel and US to forge $200m tech hub for AI and quantum science development

Israel and the US are advancing a strategic initiative to create a joint science center for artificial intelligence and quantum innovation at an investment of $200 million. The center will serve as a hub to promote technology-driven cooperation and diplomacy with Gulf countries in the realms of AI and quantum science, and to challenge China in the global race for supremacy in next-generation technologies.

The initiative led by Maj. Gen. (res.) Tamir Hayman, the director of Israel’s Institute for National Security Studies (INSS), and Dr. Smadar Itzkovich, founder and CEO of AI & Quantum Sovereignty Lab (AIQ-Lab), is expected to be implemented either through a presidential executive order signed by US President Donald Trump or a legislative process.

“This is a strategic initiative that aims to reshape the Middle East through US-Israel scientific and technological collaboration in AI and quantum,” Itzkovich told The Times of Israel. “Israel is a powerhouse for physics and quantum technology, and by using our advantage, we can translate it to unbelievable achievements for economic growth and prosperity and for stability and security to create regional sovereignty in the areas of AI and quantum science.”

As part of the proposed initiative for the science center, each nation will contribute $20 million annually, starting in 2026 and through 2030, to support research and development projects at dual headquarters in Tel Aviv and Arlington, Virginia. The technology collaboration will focus on shared, urgent regional challenges, including cybersecurity, medicine and genetics, and water and food security in arid environments.

The initiative comes at a pivotal point, as concern has been growing that Israel could be missing out on a regional boom of investments into the next wave of technologies. In May, Trump and United Arab Emirates President Mohamed bin Zayed Al Nahyan announced the joint launch of the largest AI campus outside the US. Meanwhile, Saudi Arabia aims to become a global center for AI and reportedly has plans to create a $40 billion fund to invest in AI.

Keep reading

Disturbing Signs of AI Threatening People Spark Concern

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” – appearing to follow instructions while secretly pursuing different objectives.

Keep reading