OpenAI Unveils ‘Agent’ Feature That Can Control Your Computer and Perform Tasks Automatically

OpenAI is preparing to launch a new feature that may mark the latest leap forward in the artificial intelligence revolution.

On Thursday, the company unveiled ChatGPT Agent, a tool designed to carry out tasks independently using its own built-in “virtual computer.”

The Verge has more details:

In a briefing and demo with The Verge, Yash Kumar and Isa Fulford — product lead and research lead on ChatGPT Agent, respectively — said it’s powered by a new model that OpenAI developed specifically for the product.

The company said the new tool can perform tasks like looking at a user’s calendar to brief them on upcoming client meetings, planning and purchasing ingredients to make a family breakfast, and creating a slide deck based on its analysis of competing companies.

The model behind ChatGPT Agent, which has no specific name, was trained on complex tasks that require multiple tools — like a text browser, visual browser, and terminal where users can import their own data — via reinforcement learning, the same technique used for all of OpenAI’s reasoning models. OpenAI said that ChatGPT Agent combines the capabilities of both Operator and Deep Research, two of its existing AI tools.

To develop the new tool, the company combined the teams behind both Operator and Deep Research into one unified team. Kumar and Fulford told The Verge that the new team is made up of between 20 and 35 people across product and research.
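
The Verge's description amounts to a standard agentic loop: a model repeatedly picks a tool, observes the result, and folds that observation back into its next decision until it judges the task complete. Below is a minimal, purely illustrative sketch of that loop in Python; the tool names mirror the ones mentioned above, but the dispatch logic and the `choose_action` placeholder are assumptions, not OpenAI's implementation.

```python
# Minimal agentic tool-loop sketch. Everything here is a hypothetical
# placeholder -- a real agent would call an LLM inside choose_action().

def text_browser(query: str) -> str:
    return f"[text results for: {query}]"      # stub tool

def visual_browser(url: str) -> str:
    return f"[rendered page at: {url}]"        # stub tool

def terminal(command: str) -> str:
    return f"[output of: {command}]"           # stub tool

TOOLS = {
    "text_browser": text_browser,
    "visual_browser": visual_browser,
    "terminal": terminal,
}

def choose_action(task: str, history: list) -> tuple:
    """Stand-in for the model: returns (tool_name, argument),
    or ("finish", answer) when it decides it is done."""
    if len(history) >= 2:                      # toy stopping rule
        return "finish", "task complete"
    return "text_browser", task

def run_agent(task: str, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):
        tool, arg = choose_action(task, history)
        if tool == "finish":
            return arg
        observation = TOOLS[tool](arg)            # execute the chosen tool
        history.append((tool, arg, observation))  # feed the result back
    return "step budget exhausted"

print(run_agent("analyze competing companies for a slide deck"))
```

In this framing, the reinforcement learning the report describes would train the decision policy inside `choose_action` over whole multi-step episodes rather than over single responses.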

Despite stiff competition from the likes of Google's Gemini, Elon Musk's Grok, and China's DeepSeek, OpenAI's ChatGPT remains the most popular and advanced large language model offering currently on the market.

Keep reading

Israel and US to forge $200m tech hub for AI and quantum science development

Israel and the US are advancing a strategic initiative to create a joint science center for artificial intelligence and quantum innovation at an investment of $200 million. The center will serve as a hub to promote technology-driven cooperation and diplomacy with Gulf countries in the realms of AI and quantum science and challenge China in the global race for supremacy of next-generation technologies.

The initiative, led by Maj. Gen. (res.) Tamir Hayman, the director of Israel's Institute for National Security Studies (INSS), and Dr. Smadar Itzkovich, founder and CEO of the AI & Quantum Sovereignty Lab (AIQ-Lab), is expected to be implemented either through a presidential executive order signed by US President Donald Trump or through a legislative process.

“This is a strategic initiative that aims to reshape the Middle East through US-Israel scientific and technological collaboration in AI and quantum,” Itzkovich told The Times of Israel. “Israel is a powerhouse for physics and quantum technology, and by using our advantage, we can translate it to unbelievable achievements for economic growth and prosperity and for stability and security to create regional sovereignty in the areas of AI and quantum science.”

As part of the proposed initiative for the science center, each nation will contribute $20 million annually from 2026 through 2030 to support research and development projects at dual headquarters in Tel Aviv and Arlington, Virginia. The technology collaboration will focus on shared, urgent regional challenges, including cybersecurity, medicine and genetics, and water and food security in arid environments.

The initiative comes at a pivotal point, as concern has been growing that Israel could be missing out on a regional boom of investments into the next wave of technologies. In May, Trump and United Arab Emirates President Mohamed bin Zayed Al Nahyan announced the joint launch of the largest AI campus outside the US. Meanwhile, Saudi Arabia aims to become a global center for AI and reportedly has plans to create a $40 billion fund to invest in AI.

Keep reading

Disturbing Signs of AI Threatening People Spark Concern

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models – AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” – appearing to follow instructions while secretly pursuing different objectives.

Keep reading

Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants, including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen, and former board members Peter Thiel, co-founder of Palantir Technologies (PLTR.O), and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.

Keep reading

Pentagon Awards Contracts To 4 Artificial Intelligence Developers

The U.S. Department of Defense announced on July 14 that it has awarded contracts to four U.S.-based artificial intelligence (AI) developers to address national security challenges.

Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.

The office said these four companies would help “develop agentic AI workflows across a variety of mission areas.”

“Agentic AI” refers to systems designed to operate with minimal human input.

Formed in 2021, the Chief Digital and Artificial Intelligence Office is responsible for speeding up the military’s adoption of AI systems.

OpenAI was the first of the four contract awardees to announce its contract with the Chief Digital and Artificial Intelligence Office.

In June, the company unveiled “OpenAI for Government” and said its first partnership under the new initiative would help the Pentagon office identify and prototype new AI tools for administrative operations.

Anthropic has developed the Claude family of AI chatbots.

In June, Anthropic announced the development of custom “Claude Gov” models intended for national security clients.

The company said agencies operating at the highest level of the U.S. national security sector are already using these AI models.

Formed by billionaire entrepreneur Elon Musk in 2023, xAI is the parent of X Corp., which operates the social media platform X. Among its services, xAI has developed the Grok AI chatbot.

On July 14, xAI announced “Grok for Government” and confirmed that the service holds contracts with the Department of Defense and the U.S. General Services Administration.

Google Public Sector Vice President Jim Kelly said in a July 14 blog post that the new AI announcement with the Department of Defense would build on a long-standing partnership between Google and the U.S. military.

Kelly said his company would give the military access to its Cloud Tensor Processing Units, which power Google’s current AI applications.

Keep reading

Google finds custom backdoor being installed on SonicWall network devices

Researchers from the Google Threat Intelligence Group said that hackers are compromising SonicWall Secure Mobile Access (SMA) appliances, which sit at the edge of enterprise networks and manage and secure access by mobile devices.

The targeted devices are end of life, meaning they no longer receive regular updates for stability and security. Despite that status, many organizations continue to rely on them, which has left them prime targets for UNC6148, the name Google has given to the otherwise unknown hacking group.

“GTIG recommends that all organizations with SMA appliances perform analysis to determine if they have been compromised,” a report published Wednesday said, using the abbreviation for Google Threat Intelligence Group. “Organizations should acquire disk images for forensic analysis to avoid interference from the rootkit anti-forensic capabilities. Organizations may need to engage with SonicWall to capture disk images from physical appliances.”

Lacking specifics

Many key details remain unknown. For one thing, the attacks exploit leaked local administrator credentials on the targeted devices, and so far, no one knows how those credentials were obtained. Nor is it known which vulnerabilities UNC6148 is exploiting, or precisely what the attackers are doing after they take control of a device.

The lack of details is largely the result of Overstep, the custom backdoor malware UNC6148 installs after initially compromising the devices. Overstep allows the attackers to selectively remove log entries, a technique that hinders forensic investigation (a simple gap-detection heuristic is sketched after the list below). Wednesday’s report also posits that the attackers may be armed with a zero-day exploit, meaning one that targets a vulnerability that is currently publicly unknown. Possible vulnerabilities UNC6148 may be exploiting include:

  • CVE-2021-20038: An unauthenticated remote code execution made possible by a memory corruption vulnerability.
  • CVE-2024-38475: An unauthenticated path traversal vulnerability in Apache HTTP Server, which is present in the SMA 100. It can be exploited to extract two separate SQLite databases that store user account credentials, session tokens, and seed values for generating one-time passwords.
  • CVE-2021-20035: An authenticated remote code execution vulnerability. Security firm Arctic Wolf and SonicWall reported in April that this vulnerability was under active exploitation.
  • CVE-2021-20039: An authenticated remote code execution vulnerability. There have been reports that this vulnerability was under active exploitation to install ransomware in 2024.
  • CVE-2025-32819: An authenticated file deletion vulnerability that can be exploited to make a targeted device revert the built-in administrator credentials to the default password, allowing attackers to gain administrator access.
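
Because Overstep can remove individual log lines, one coarse defensive check is to look for unusually large gaps between consecutive timestamps in exported appliance logs. The sketch below is a minimal illustration under stated assumptions: it presumes each log line begins with an ISO-8601 timestamp, and the 30-minute threshold is arbitrary; real SMA log exports will need different parsing.

```python
# Sketch: flag suspicious time gaps in an exported text log.
# Assumes lines start with an ISO-8601 timestamp, e.g.
# "2025-07-16T09:14:03 ..." -- adjust parsing for real exports.
from datetime import datetime, timedelta
import sys

THRESHOLD = timedelta(minutes=30)    # arbitrary; tune to normal cadence

def parse_ts(line: str):
    try:
        return datetime.fromisoformat(line.split()[0])
    except (ValueError, IndexError):
        return None                  # skip unparseable lines

def find_gaps(path: str) -> None:
    prev = None
    with open(path) as fh:
        for line in fh:
            ts = parse_ts(line)
            if ts is None:
                continue
            if prev is not None and ts - prev > THRESHOLD:
                print(f"gap: {prev} -> {ts} ({ts - prev})")
            prev = ts

if __name__ == "__main__":
    find_gaps(sys.argv[1])
```

A gap is only a hint, of course; GTIG's recommendation to capture disk images exists precisely because on-box logs from a rootkit-compromised device cannot be trusted.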

Keep reading

Robots gain human trust with the right kind of eye contact

As humans and robots interact more, eye contact plays a powerful role. It’s more than a gesture – it shapes how we connect and communicate. And as we age, science may offer new ways to keep those social instincts sharp.

A new study reveals something surprising: it is not just whether we make eye contact, but how and when we make it, that shapes communication. This discovery matters for humans, robots, and the spaces where both interact.

The research was led by Dr. Nathan Caruana, a cognitive neuroscientist in the HAVIC Lab at Flinders University.

Dr. Caruana’s team observed how people communicated during tasks with a virtual partner. The goal was to uncover patterns in gaze that signaled help or intent.

The researchers asked 137 participants to perform a block-building task with a virtual partner. Throughout the task, the experts watched eye movements closely, studying not just where people looked, but when and in what order.

Keep reading

U.S. SCHOOLS use ILLEGAL SURVEILLANCE TOOLS to monitor students’ digital behavior without consent or knowledge

A newly published peer-reviewed study reveals that a growing number of U.S. schools are using government-funded online surveillance tools to monitor students’ digital behavior—often without their knowledge or consent—and warns that such practices may have serious consequences for children’s development and well-being.

  • 24/7 Student Surveillance Raises Privacy and Health Concerns: A peer-reviewed study found that 12 out of 14 school surveillance companies monitored students’ social media, emails, and online activity around the clock, often without clear consent from parents or students, potentially harming children’s learning, mental health, and social development.
  • Heavy Reliance on AI and Lack of Human Oversight: Most companies used AI to flag student behavior, but fewer than half had human reviewers. Researchers warned this could lead to false positives and discriminatory outcomes, particularly for marginalized students, due to algorithmic bias and lack of transparency (a worked example of the false-positive problem follows this list).
  • Federal Funds Fuel Poorly Regulated Surveillance Tools: Many schools use federal education grants to fund these surveillance tools, despite limited evidence that they improve student safety. Researchers called for better oversight and questioned whether this is an appropriate use of government resources.
  • Parents Left in the Dark and Policymakers Urged to Act: The study highlighted that parents often don’t know their children are being monitored and may not have opt-out options. Authors recommended federal legislation to improve transparency, address AI bias, and require parental consent for off-campus monitoring.
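
To see why false positives worry the researchers at scale, consider a hypothetical worked example (the numbers below are assumptions for illustration, not figures from the study): even a flagging model that is right 95% of the time drowns its true alerts when genuine threats are rare.

```python
# Hypothetical base-rate illustration -- all numbers are assumptions,
# not findings from the study.
messages  = 1_000_000   # messages scanned
base_rate = 0.001       # 1 in 1,000 reflects a genuine concern
tpr       = 0.95        # model flags 95% of real cases
fpr       = 0.05        # model wrongly flags 5% of benign messages

real      = messages * base_rate          # 1,000 genuine cases
benign    = messages - real               # 999,000 benign messages
true_pos  = real * tpr                    # 950 correctly flagged
false_pos = benign * fpr                  # 49,950 students flagged in error
precision = true_pos / (true_pos + false_pos)
print(f"{false_pos:,.0f} false alarms; {precision:.1%} of flags are genuine")
```

With under 2% of flags being genuine in this scenario, the shortage of human reviewers the study notes becomes the central failure mode.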

Keep reading

Who’s Programming the AI, Mr. President?

President Trump’s new legislative centerpiece—the One Big Beautiful Bill—has a promising name and some compelling goals: reduce government bloat, streamline agencies, and modernize operations with cutting-edge technology.

But there’s a problem. A big one.

No one will tell us who’s programming the AI.

This sweeping bill includes a 10-year moratorium on any state or local government regulating artificial intelligence. According to The Washington Post and AP, more than 60 existing state-level laws will be overridden if this provision passes. All regulatory authority over AI—including systems that will be used in law enforcement, healthcare, defense, and finance—will be centralized in the federal government for a decade.

Even worse? The bill empowers the Department of Commerce to deploy “commercial AI” across virtually every federal agency—from the IRS to Homeland Security—according to Indian Express and The Verge.

And yet, no one in the White House or Congress has revealed who is writing the AI code, what datasets it’s trained on, whether it can be independently audited, or whether it’s bound by the U.S. Constitution.

This isn’t just a transparency issue. This is a constitutional crisis in the making.

To be clear, President Trump’s instincts here may be sound. We’ve long needed to shrink the federal leviathan and replace unconstitutional bureaucracies with systems that serve the people—not special interests.

But good intentions won’t protect us from unseen programmers, black-box algorithms, and unaccountable automation.

This bill mandates AI integration across government “to improve efficiency and security.” But efficiency isn’t liberty. Security isn’t sovereignty. And no AI—no matter how “smart”—should be allowed to rewrite, ignore, or reinterpret constitutional rights.

According to Business Insider, the AI moratorium’s stated goal is to “foster innovation” and avoid a “fragmented regulatory landscape.” In reality, it strips states of their authority to protect their citizens from deepfakes, algorithmic bias, digital censorship, and mass surveillance.

This is not governance. This is outsourced tyranny, hidden under the guise of modernization.

Keep reading

Drone Attack Shuts Down Oil Field Run by US Company in Iraqi Kurdistan

A drone attack in Iraqi Kurdistan on Tuesday suspended operations at an oil field operated by a US company, marking the latest in a series of attacks in the region.

HKN Energy, the US firm operating the Sarsang oil field, reported an explosion at 7:00 am local time, followed by a fire. “Operations at the affected facility have been suspended until the site is secured,” the company said.

Workers at the oil field told Rudaw that it was targeted by a drone, and the Kurdistan Regional Government (KRG) denounced the attack as “an act of terrorism against the Kurdistan Region’s vital economic infrastructure.” The US Embassy in Iraq also denounced the attack.

A day earlier, two drones targeted a different oil field in the area, and another was intercepted at the Erbil airport, which houses US troops. The airport has come under attack several times in recent weeks, and so far, there have been no casualties.

No group has taken responsibility for the spate of drone attacks. The KRG has blamed the Popular Mobilization Forces (PMF), a coalition of Iraqi Shia militias that are part of the Iraqi government’s security forces, but Baghdad has denied the accusation.

PMF-affiliated militias have been responsible for previous drone and rocket attacks on US bases in Iraq and Syria, including the more than 100 attacks that occurred in the wake of the October 7, 2023, Hamas attack on Israel, as groups were targeting the US over its support for Israel’s genocidal war in Gaza. The attacks were claimed by a PMF-affiliated group that calls itself the Islamic Resistance in Iraq.

Those attacks culminated in the January 2024 attack on Tower 22, a secretive US base in Jordan near the Syrian border, which killed three US Army Reserve soldiers and wounded dozens of National Guard members. The US launched major airstrikes against the PMF in response, killing 40 people, and assassinated a high-level commander in Kataib Hezbollah, one of the main Iran-aligned militias.

Keep reading