Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, as well as applications in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.

Keep reading

Artificial Intelligence Breeds Mindless Inhumanity

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Keep reading

Study finds AI tools made open source software developers 19 percent slower

When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to increase their productivity and overall efficiency in creating computer code. However, a new randomized controlled trial has found that experienced open source coders became less efficient at coding-related tasks when they used current AI tools.

For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual “tasks” involved with maintaining those repos, such as “bug fixes, features, and refactors that would normally be part of their regular work.” For half of those tasks, the developers used AI tools like Cursor Pro or Anthropic’s Claude; for the others, the programmers were instructed not to use AI assistance. Expected time forecasts for each task (made before the groupings were assigned) were used as a proxy to balance out the overall difficulty of the tasks in each experimental group, and the time needed to fix pull requests based on reviewer feedback was included in the overall assessment.
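To give a rough sense of how a figure like that can be derived, the sketch below normalizes each task’s completion time by its pre-assignment forecast and compares the two conditions. It is a toy illustration with invented numbers chosen to reproduce the headline figure, not METR’s actual data or statistical analysis.

```python
# Toy illustration only: invented numbers, not METR's data or methodology.
# Each row is (forecast_hours, actual_hours, used_ai). Forecasts were made
# before tasks were assigned to a condition, so they act as a rough
# difficulty normalizer across the two groups.
tasks = [
    (2.0, 2.40, True),
    (3.0, 3.54, True),
    (1.5, 1.50, False),
    (2.5, 2.50, False),
]

def mean_ratio(rows, used_ai):
    """Average actual/forecast time ratio for one condition."""
    ratios = [actual / forecast for forecast, actual, ai in rows if ai == used_ai]
    return sum(ratios) / len(ratios)

ai = mean_ratio(tasks, used_ai=True)
control = mean_ratio(tasks, used_ai=False)

# Relative slowdown of AI-assisted tasks versus the no-AI condition.
print(f"AI-assisted tasks took {(ai / control - 1) * 100:.0f}% longer")
```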

Keep reading

How to Stop Google’s Gemini AI from Accessing Your Apps

Google has initiated a broad Android update that enables its Gemini AI assistant to access third-party apps such as WhatsApp, even when users had previously set their devices to block such interactions.

The change began rolling out this week and appears to override individual privacy preferences, leaving many Android users without a clear path to opt out.

According to Ars Technica, in an email sent ahead of the rollout, Google linked to a support page that confirms “human reviewers (including service providers) read, annotate, and process” data accessed by Gemini.

The email fails to provide direct steps for those hoping to prevent the AI from integrating with their apps.

While it claims users can block specific app interactions, it also states that data will still be stored temporarily for 72 hours.

The message offers conflicting statements about user control. It says Gemini will begin integrating with services like WhatsApp, Phone, and Messages “whether your Gemini apps activity is on or off.”

Yet shortly after, it says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages does Google explain how users can fully remove Gemini from their devices or prevent it from being enabled in the background.

Keep in mind that WhatsApp is an end-to-end encrypted messaging app, and letting Gemini access it undermines that encryption.

Keep reading

OpenAI Unveils ‘Agent’ Feature That Can Control Your Computer and Perform Tasks Automatically

OpenAI is preparing to launch a new feature that may mark the latest leap forward in the artificial intelligence revolution.

On Thursday, the company unveiled ChatGPT Agent, a tool designed to carry out tasks independently using its own built-in “virtual computer.”

The Verge has more details:

In a briefing and demo with The Verge, Yash Kumar and Isa Fulford — product lead and research lead on ChatGPT Agent, respectively — said it’s powered by a new model that OpenAI developed specifically for the product.

The company said the new tool can perform tasks like looking at a user’s calendar to brief them on upcoming client meetings, planning and purchasing ingredients to make a family breakfast, and creating a slide deck based on its analysis of competing companies.

The model behind ChatGPT Agent, which has no specific name, was trained on complex tasks that require multiple tools — like a text browser, visual browser, and terminal where users can import their own data — via reinforcement learning, the same technique used for all of OpenAI’s reasoning models. OpenAI said that ChatGPT Agent combines the capabilities of both Operator and Deep Research, two of its existing AI tools.

To develop the new tool, the company combined the teams behind both Operator and Deep Research into one unified team. Kumar and Fulford told The Verge that the new team is made up of between 20 and 35 people across product and research.

Despite stiff competition from the likes of Google’s Gemini, Elon Musk’s Grok, and China’s DeepSeek, OpenAI’s models remain the most popular and among the most advanced large language models currently on the market.

Keep reading

Israel and US to forge $200m tech hub for AI and quantum science development

Israel and the US are advancing a strategic initiative to create a joint science center for artificial intelligence and quantum innovation at an investment of $200 million. The center will serve as a hub to promote technology-driven cooperation and diplomacy with Gulf countries in the realms of AI and quantum science, and to challenge China in the global race for supremacy in next-generation technologies.

The initiative, led by Maj. Gen. (res.) Tamir Hayman, the director of Israel’s Institute for National Security Studies (INSS), and Dr. Smadar Itzkovich, founder and CEO of the AI & Quantum Sovereignty Lab (AIQ-Lab), is expected to be implemented either through a presidential executive order signed by US President Donald Trump or through a legislative process.

“This is a strategic initiative that aims to reshape the Middle East through US-Israel scientific and technological collaboration in AI and quantum,” Itzkovich told The Times of Israel. “Israel is a powerhouse for physics and quantum technology, and by using our advantage, we can translate it to unbelievable achievements for economic growth and prosperity and for stability and security to create regional sovereignty in the areas of AI and quantum science.”

As part of the proposed initiative, each nation will contribute $20 million annually from 2026 through 2030 to support research and development projects at dual headquarters in Tel Aviv and Arlington, Virginia. The technology collaboration will focus on shared, urgent regional challenges, including cybersecurity, medicine and genetics, and water and food security in arid environments.

The initiative comes at a pivotal point, as concern has been growing that Israel could be missing out on a regional boom of investments into the next wave of technologies. In May, Trump and United Arab Emirates President Mohamed bin Zayed Al Nahyan announced the joint launch of the largest AI campus outside the US. Meanwhile, Saudi Arabia aims to become a global center for AI and reportedly has plans to create a $40 billion fund to invest in AI.

Keep reading

Disturbing Signs of AI Threatening People Spark Concern

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, when threatened with being unplugged, Anthropic’s latest creation, Claude 4, lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” – appearing to follow instructions while secretly pursuing different objectives.

Keep reading

Pentagon Awards Contracts To 4 Artificial Intelligence Developers

The U.S. Department of Defense announced on July 14 that it has awarded contracts to four U.S.-based artificial intelligence (AI) developers to address national security challenges.

Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.

The office said these four companies would help “develop agentic AI workflows across a variety of mission areas.”

“Agentic AI” refers to systems designed to operate with minimal human input.

Formed in 2021, the Chief Digital and Artificial Intelligence Office is responsible for speeding up the military’s adoption of AI systems.

OpenAI was the first of the four contract awardees to announce its contract with the Chief Digital and Artificial Intelligence Office.

In June, the company unveiled “OpenAI for Government” and said its first partnership under the new initiative would help the Pentagon office identify and prototype new AI tools for administrative operations.

Anthropic has developed the Claude family of AI chatbots.

In June, Anthropic announced the development of custom “Claude Gov” models intended for national security clients.

The company said agencies operating at the highest level of the U.S. national security sector are already using these AI models.

Formed by billionaire entrepreneur Elon Musk in 2023, xAI is the parent company of X Corp., which operates the social media platform X. Among its services, xAI has developed the Grok AI chatbot.

On July 14, xAI announced “Grok for Government” and confirmed that the service holds contracts with the Department of Defense and the U.S. General Services Administration.

Google Public Sector Vice President Jim Kelly said in a July 14 blog post that the new AI announcement with the Department of Defense would build on a long-standing partnership between Google and the U.S. military.

Kelly said his company would give the military access to its Cloud Tensor Processing Units, which power Google’s current AI applications.

Keep reading

Who’s Programming the AI, Mr. President?

President Trump’s new legislative centerpiece—the One Big Beautiful Bill—has a promising name and some compelling goals: reduce government bloat, streamline agencies, and modernize operations with cutting-edge technology.

But there’s a problem. A big one.

No one will tell us who’s programming the AI.

This sweeping bill includes a 10-year moratorium on any state or local government regulating artificial intelligence. According to The Washington Post and AP, more than 60 existing state-level laws will be overridden if this provision passes. All regulatory authority over AI—including systems that will be used in law enforcement, healthcare, defense, and finance—will be centralized in the federal government for a decade.

Even worse? The bill empowers the Department of Commerce to deploy “commercial AI” across virtually every federal agency—from the IRS to Homeland Security—according to Indian Express and The Verge.

And yet, no one in the White House or Congress has revealed who is writing the AI code, what datasets it’s trained on, whether it can be independently audited, or whether it’s bound by the U.S. Constitution.

This isn’t just a transparency issue. This is a constitutional crisis in the making.

To be clear, President Trump’s instincts here may be sound. We’ve long needed to shrink the federal leviathan and replace unconstitutional bureaucracies with systems that serve the people—not special interests.

But good intentions won’t protect us from unseen programmers, black-box algorithms, and unaccountable automation.

This bill mandates AI integration across government “to improve efficiency and security.” But efficiency isn’t liberty. Security isn’t sovereignty. And no AI—no matter how “smart”—should be allowed to rewrite, ignore, or reinterpret constitutional rights.

According to Business Insider, the AI moratorium’s stated goal is to “foster innovation” and avoid a “fragmented regulatory landscape.” In reality, it strips states of their authority to protect their citizens from deepfakes, algorithmic bias, digital censorship, and mass surveillance.

This is not governance. This is outsourced tyranny, hidden under the guise of modernization.

Keep reading

Trump heads to Pittsburgh to announce $70 BILLION AI, energy investment with Sen Dave McCormick

President Donald Trump and Sen. Dave McCormick (R-PA) will team up in Pittsburgh on Tuesday to announce a $70 billion investment in AI and energy for the state. The push will involve thousands of new jobs for Pennsylvanians.

The announcement planned for Tuesday will come during the inaugural Pennsylvania Energy and Innovation Summit in Pittsburgh, and will “bring together the world’s top leaders in energy and AI, alongside the biggest global investors, labor and trades leaders, and government officials, to showcase Pennsylvania’s incredible potential to power the AI revolution,” a press release from McCormick’s office stated.

McCormick will be joined by Trump, leaders in energy, and others at Carnegie Mellon University for the announcement. The $70 billion is the “largest investment commitment in these industries in terms of dollars for the state and jobs created in the history of the Commonwealth,” McCormick said, per Axios.

“Anticipated investments include new data centers, new power generation and grid infrastructure to meet surging data center demand, along with AI training programs and apprenticeships for businesses,” a preview of McCormick’s speech adds.

The COO of Blackstone, an alternative asset manager, is expected to announce a $25 billion investment in the infrastructure needed for AI, including energy and data centers. The investment is expected to spur 6,000 construction jobs annually and 3,000 permanent jobs.

Keep reading