Study finds AI tools made open source software developers 19 percent slower

When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to increase their productivity and overall efficiency in creating computer code. However, a new randomized controlled trial has found that experienced open source coders became less efficient at coding-related tasks when they used current AI tools.

For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual “tasks” involved with maintaining those repos, such as “bug fixes, features, and refactors that would normally be part of their regular work.” For half of those tasks, the developers used AI tools like Cursor Pro or Anthropic’s Claude; for the others, the programmers were instructed not to use AI assistance. Expected time forecasts for each task (made before the groupings were assigned) were used as a proxy to balance out the overall difficulty of the tasks in each experimental group, and the time needed to fix pull requests based on reviewer feedback was included in the overall assessment.
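The balancing step described above — using pre-assignment time forecasts as a proxy for difficulty — can be illustrated with a matched-pair randomization sketch. This is a hypothetical illustration, not METR's actual code; the `forecast_hours` field and function names are assumptions.

```python
import random

def assign_matched_pairs(tasks, seed=0):
    """Matched-pair randomization: sort tasks by their pre-assignment
    time forecast, pair adjacent tasks, and randomly split each pair
    between the two arms so both groups end up with similar expected
    difficulty."""
    rng = random.Random(seed)
    ordered = sorted(tasks, key=lambda t: t["forecast_hours"])
    ai_allowed, ai_disallowed = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)               # random assignment within the pair
        ai_allowed.append(pair[0])
        ai_disallowed.append(pair[1])
    if len(ordered) % 2:                # odd task out goes to a random arm
        (ai_allowed if rng.random() < 0.5 else ai_disallowed).append(ordered[-1])
    return ai_allowed, ai_disallowed

tasks = [{"id": i, "forecast_hours": h}
         for i, h in enumerate([0.5, 1, 2, 2, 4, 8])]
a, b = assign_matched_pairs(tasks)
```

Because pairs are formed from tasks with adjacent forecasts, each arm receives one task from every difficulty band, which is the intuition behind using forecasts to equalize the groups.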

Keep reading

How to Stop Google’s Gemini AI from Accessing Your Apps

Google has initiated a broad Android update that enables its Gemini AI assistant to access third-party apps such as WhatsApp, even when users had previously set their devices to block such interactions.

This shift began taking effect this week and appears to override individual privacy preferences, leaving many Android users without a clear way to opt out.

According to Ars Technica, in an email sent ahead of the rollout, Google linked to a support page that confirms “human reviewers (including service providers) read, annotate, and process” data accessed by Gemini.

The email fails to provide direct steps for those hoping to prevent the AI from integrating with their apps.

While it claims users can block specific app interactions, it also states that data will still be stored temporarily for 72 hours.

The message offers conflicting statements about user control. It says Gemini will begin integrating with services like WhatsApp, Phone, and Messages “whether your Gemini apps activity is on or off.”

Yet shortly after, it says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages does Google explain how users can fully remove Gemini from their devices or prevent it from being enabled in the background.

Keep in mind that WhatsApp is an end-to-end encrypted messaging app, and Gemini’s access to it undermines that encryption.

Keep reading

OpenAI Unveils ‘Agent’ Feature That Can Control Your Computer and Perform Tasks Automatically

OpenAI is preparing to launch a new feature that may mark the latest leap forward in the artificial intelligence revolution.

On Thursday, the company unveiled ChatGPT Agent, a tool designed to carry out tasks independently using its own built-in “virtual computer.”

The Verge has more details:

In a briefing and demo with The Verge, Yash Kumar and Isa Fulford — product lead and research lead on ChatGPT Agent, respectively — said it’s powered by a new model that OpenAI developed specifically for the product.

The company said the new tool can perform tasks like looking at a user’s calendar to brief them on upcoming client meetings, planning and purchasing ingredients to make a family breakfast, and creating a slide deck based on its analysis of competing companies.

The model behind ChatGPT Agent, which has no specific name, was trained on complex tasks that require multiple tools — like a text browser, visual browser, and terminal where users can import their own data — via reinforcement learning, the same technique used for all of OpenAI’s reasoning models. OpenAI said that ChatGPT Agent combines the capabilities of both Operator and Deep Research, two of its existing AI tools.

To develop the new tool, the company combined the teams behind both Operator and Deep Research into one unified team. Kumar and Fulford told The Verge that the new team is made up of between 20 and 35 people across product and research.

Despite stiff competition from the likes of Google’s Gemini, Elon Musk’s Grok, and China’s DeepSeek, OpenAI’s models remain the most popular and advanced large language models currently on the market.

Keep reading

Israel and US to forge $200m tech hub for AI and quantum science development

Israel and the US are advancing a strategic initiative to create a joint science center for artificial intelligence and quantum innovation, with an investment of $200 million. The center will serve as a hub to promote technology-driven cooperation and diplomacy with Gulf countries in AI and quantum science, and to challenge China in the global race for supremacy in next-generation technologies.

The initiative, led by Maj. Gen. (res.) Tamir Hayman, the director of Israel’s Institute for National Security Studies (INSS), and Dr. Smadar Itzkovich, founder and CEO of the AI & Quantum Sovereignty Lab (AIQ-Lab), is expected to be implemented either through a presidential executive order signed by US President Donald Trump or through legislation.

“This is a strategic initiative that aims to reshape the Middle East through US-Israel scientific and technological collaboration in AI and quantum,” Itzkovich told The Times of Israel. “Israel is a powerhouse for physics and quantum technology, and by using our advantage, we can translate it to unbelievable achievements for economic growth and prosperity and for stability and security to create regional sovereignty in the areas of AI and quantum science.”

As part of the proposed initiative for the science center, each nation will contribute $20 million annually, starting in 2026 and through 2030, to support research and development projects at dual headquarters in Tel Aviv and Arlington, Virginia. The technology collaboration will focus on shared, urgent regional challenges, including cybersecurity, medicine and genetics, and water and food security in arid environments.
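The $200 million headline figure follows directly from the contribution schedule described above; a quick arithmetic check:

```python
annual_per_nation = 20_000_000   # each nation's yearly contribution, in dollars
nations = 2                      # Israel and the US
years = 2030 - 2026 + 1          # 2026 through 2030, inclusive
total = annual_per_nation * nations * years
print(total)  # 200_000_000 — matching the reported $200 million investment
```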

The initiative comes at a pivotal point, as concern has been growing that Israel could be missing out on a regional boom of investments into the next wave of technologies. In May, Trump and United Arab Emirates President Mohamed bin Zayed Al Nahyan announced the joint launch of the largest AI campus outside the US. Meanwhile, Saudi Arabia aims to become a global center for AI and reportedly has plans to create a $40 billion fund to invest in AI.

Keep reading

Disturbing Signs of AI Threatening People Spark Concern

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation, Claude 4, lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” – appearing to follow instructions while secretly pursuing different objectives.

Keep reading

Pentagon Awards Contracts To 4 Artificial Intelligence Developers

The U.S. Department of Defense announced on July 14 that it has awarded contracts to four U.S.-based artificial intelligence (AI) developers to address national security challenges.

Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.

The office said these four companies would help “develop agentic AI workflows across a variety of mission areas.”

“Agentic AI” refers to systems designed to operate with minimal human input.

Formed in 2021, the Chief Digital and Artificial Intelligence Office is responsible for speeding up the military’s adoption of AI systems.

OpenAI was the first of the four contract awardees to announce its contract with the Chief Digital and Artificial Intelligence Office.

In June, the company unveiled “OpenAI for Government” and said its first partnership under the new initiative would help the Pentagon office identify and prototype new AI tools for administrative operations.

Anthropic has developed the Claude family of AI chatbots.

In June, Anthropic announced the development of custom “Claude Gov” models intended for national security clients.

The company said agencies operating at the highest level of the U.S. national security sector are already using these AI models.

Formed by billionaire entrepreneur Elon Musk in 2023, xAI serves as a parent to X Corp., which operates the social media platform X. Among its services, xAI has developed the Grok AI chatbot.

On July 14, xAI announced “Grok for Government” and confirmed that the service holds contracts with the Department of Defense and the U.S. General Services Administration.

Google Public Sector Vice President Jim Kelly said in a July 14 blog post that the new AI announcement with the Department of Defense would build on a long-standing partnership between Google and the U.S. military.

Kelly said his company would give the military access to its Cloud Tensor Processing Units, which power Google’s current AI applications.

Keep reading

Who’s Programming the AI, Mr. President?

President Trump’s new legislative centerpiece—the One Big Beautiful Bill—has a promising name and some compelling goals: reduce government bloat, streamline agencies, and modernize operations with cutting-edge technology.

But there’s a problem. A big one.

No one will tell us who’s programming the AI.

This sweeping bill includes a 10-year moratorium on any state or local government regulating artificial intelligence. According to The Washington Post and AP, more than 60 existing state-level laws will be overridden if this provision passes. All regulatory authority over AI—including systems that will be used in law enforcement, healthcare, defense, and finance—will be centralized in the federal government for a decade.

Even worse? The bill empowers the Department of Commerce to deploy “commercial AI” across virtually every federal agency, from the IRS to Homeland Security, according to The Indian Express and The Verge.

And yet, no one in the White House or Congress has revealed who is writing the AI code, what datasets it’s trained on, whether it can be independently audited, or whether it’s bound by the U.S. Constitution.

This isn’t just a transparency issue. This is a constitutional crisis in the making.

To be clear, President Trump’s instincts here may be sound. We’ve long needed to shrink the federal leviathan and replace unconstitutional bureaucracies with systems that serve the people—not special interests.

But good intentions won’t protect us from unseen programmers, black-box algorithms, and unaccountable automation.

This bill mandates AI integration across government “to improve efficiency and security.” But efficiency isn’t liberty. Security isn’t sovereignty. And no AI—no matter how “smart”—should be allowed to rewrite, ignore, or reinterpret constitutional rights.

According to Business Insider, the AI moratorium’s stated goal is to “foster innovation” and avoid a “fragmented regulatory landscape.” In reality, it strips states of their authority to protect their citizens from deepfakes, algorithmic bias, digital censorship, and mass surveillance.

This is not governance. This is outsourced tyranny, hidden under the guise of modernization.

Keep reading

Trump heads to Pittsburgh to announce $70 BILLION AI, energy investment with Sen Dave McCormick

President Donald Trump and Sen. Dave McCormick (R-PA) will team up in Pittsburgh on Tuesday to announce a $70 billion investment in AI and energy for the state. The push will involve thousands of new jobs for Pennsylvanians.

The announcement, planned for Tuesday, will come during the inaugural Pennsylvania Energy and Innovation Summit in Pittsburgh, and will “bring together the world’s top leaders in energy and AI, alongside the biggest global investors, labor and trades leaders, and government officials, to showcase Pennsylvania’s incredible potential to power the AI revolution,” a press release from McCormick’s office stated.

McCormick will be joined by Trump, leaders in energy, and others at Carnegie Mellon University for the announcement. The $70 billion is the “largest investment commitment in these industries in terms of dollars for the state and jobs created in the history of the Commonwealth,” McCormick said, per Axios.

“Anticipated investments include new data centers, new power generation and grid infrastructure to meet surging data center demand, along with AI training programs and apprenticeships for businesses,” a preview of McCormick’s speech adds.

The COO of Blackstone, an alternative asset manager, is expected to announce a $25 billion investment in the infrastructure needed for AI, including energy and data centers. The investment is expected to spur 6,000 annual construction jobs and 3,000 permanent jobs.

Keep reading

The Pentagon’s about to start using xAI’s Grok — and other federal agencies could be next

Elon Musk’s xAI is launching a new government-facing service. Its first client happens to be the largest employer on Earth.

The Department of Defense will pay up to $200 million for “Grok for Government,” a new collection of AI products geared toward use by federal, local, and state governments.

The department has also awarded similar contracts to Anthropic, Google, and OpenAI, which launched its own government-facing initiative last month.

“The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries,” Doug Matty, Chief Digital and AI Officer of the Department of Defense, said in a statement.

xAI said its government-facing products would include models designed specifically for national security purposes and eventually for use in classified environments.

The company also said those products would be available for purchase via the General Services Administration, opening the door for other federal agencies to use them.

The announcement comes less than a week after Grok went on an antisemitic rant on X. The company later apologized for the chatbot’s “horrific behavior,” though workers at the company erupted in anger internally over the incident.

Keep reading

Trump’s BBB busts the budget to benefit arms makers, AI warlords

Trump’s bill slashes spending on veterans to boost corporate welfare for the Big Five arms manufacturers and surging AI spying firms like Palantir.

Originally published at Antiwar.com.

The Senate is on the verge of passing the distinctly misnamed “big beautiful bill.” It is, in fact, one of the ugliest pieces of legislation to come out of Congress in living memory. The version that passed the House recently would cut $1.7 trillion, mostly in domestic spending, while providing the top 5% of taxpayers with roughly $1.5 trillion in tax breaks.

Over the next few years, the same bill will add another $150 billion to a Pentagon budget already soaring towards a record $1 trillion. In short, as of now, in the battle between welfare and warfare, the militarists are carrying the day.

Pentagon Pork and the People It Harms

The bill allocates tens of billions of dollars to pursue President Trump’s cherished but hopeless Golden Dome project, which Laura Grego of the Union of Concerned Scientists has described as “a fantasy.” She explained exactly why the Golden Dome, which would supposedly protect the United States against nuclear attack, is a pipe dream:

“Over the last 60 years, the United States has spent more than $350 billion on efforts to develop a defense against nuclear-armed ICBMs [intercontinental ballistic missiles]. This effort has been plagued by false starts and failures, and none have yet been demonstrated to be effective against a real-world threat… Missile defenses are not a useful or long-term strategy for keeping the U.S. safe from nuclear weapons.”

The bill also includes billions more for shipbuilding, heavy new investments in artillery and ammunition, and funding for next-generation combat aircraft like the F-47.

Oh, and after all of those weapons programs get their staggering cut of that future Pentagon budget, somewhere way down at the bottom of that list is a line item for improving the quality of life for active-duty military personnel. But the share aimed at the well-being of soldiers, sailors, and airmen (and women) is less than 6% of the $150 billion that Congress is now poised to add to that department’s already humongous budget. And that’s true despite the way Pentagon budget hawks invariably claim that the enormous sums they routinely plan on shoveling into it — and the overflowing coffers of the contractors it funds — are “for the troops.”

Much of the funding in the bill will flow into the districts of key members of Congress (to their considerable political benefit). For example, the Golden Dome project will send billions of dollars to companies based in Huntsville, Alabama, which calls itself “Rocket City” because of the dense network of outfits there working on both offensive missiles and missile defense systems. And that, of course, is music to the ears of Representative Mike Rogers (R-AL), the current chair of the House Armed Services Committee, who just happens to come from Alabama.

The shipbuilding funds will help prop up arms makers like HII Corporation (formerly Huntington Ingalls), which runs a shipyard in Pascagoula, Mississippi, the home state of Senate Armed Services Committee chair Roger Wicker (R-Miss). The funds will also find their way to shipyards in Maine, Connecticut, and Virginia.

Those funds will benefit the co-chairs of the House Shipbuilding Caucus, Representative Joe Courtney (D-CT) and Representative Rob Wittman (R-VA). Connecticut hosts General Dynamics’ Electric Boat plant, which makes submarines that carry ballistic missiles, while Virginia is home to HII Corporation’s Newport News Shipbuilding facility, which makes both aircraft carriers and attack submarines.

Keep reading