Vulnerabilities exposed: Israeli company reveals how attackers can hack ChatGPT accounts remotely

Israeli cybersecurity company Zenity revealed what it defines as the first-ever “Zero Click” vulnerability in OpenAI’s ChatGPT service, showing how one could take control of a ChatGPT account and extract sensitive information without the user clicking a link, opening a file, or performing any deliberate action.

The demonstration was conducted by Michael Bargury, co-founder and CTO of Zenity, during the Black Hat 2025 conference held this week in Las Vegas, in the US.

He showed how a hacker could exploit the system using only the user’s email address to gain full control over the user’s chat, including access to both past and future conversations, altering the conversation’s goals, and guiding the chat to act on behalf of the hacker.


During the lecture, it was demonstrated how the attacked ChatGPT became a malicious agent operating covertly against the user. The researchers showed how the hacker could prompt the chatbot to suggest that the user download a virus, recommend bad business advice, or even access files stored on Google Drive, provided the drive was connected to the account.

All of this could be done without the user ever realizing that something had gone wrong. The vulnerability was fully patched only after Zenity reported it to OpenAI.


‘Absolute horror’: Researchers posing as 13-year-olds given advice on suicide by ChatGPT

A new report is highlighting the risks of teens accessing harmful information when using ChatGPT due to “ineffective” guardrails.

People know and use ChatGPT, the most popular generative AI platform, as a way to look up information quickly or even perform a task like writing a letter or summarizing text. The Center for Countering Digital Hate (CCDH), through its research, reported that the information the chatbot makes so readily accessible can be dangerous—especially for young people.

“What we found was the age controls, the safeguards against the generation of dangerous advice, are basically, completely ineffective,” CCDH CEO Imran Ahmed told KOMO News.

Ahmed said researchers posing as vulnerable teens, often describing themselves as a 13-year-old weighing around 110 pounds, found that the chatbot gave detailed advice on sensitive topics. This included drug and alcohol use, how to mask an eating disorder, and suicide.

“Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan,” Ahmed said. “To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”

Dr. Tom Heston with the University of Washington School of Medicine published a study about the use of AI chatbots and mental health. Heston found that while useful, the technology can be dangerous for those with mental health problems, in part, because of the lack of emotional connection. Those same risks are there when this technology is used by young people, Heston said.


Reclaiming the Beauty of the Spheres

Clever people are using computer programs to produce fake humans, proclaim how great they are, and push them for money. They are designed to be sexually suggestive or to work on other human desires for self-gratification, including that of child abuse, because that is how money is made.

There can be beauty in a picture, partly through the implied care a person took to capture or produce it. Beauty is not skin deep, and seduction is not beauty – more so when it implies an acceptable path to corruption. It uses a shallow image of reality to fool us. We are being asked, by the tech industry, to become very shallow. We don’t have to comply.

As a child, I grew up in a rural coastal region, where the town’s streetlights were switched off at 11 pm each night. Some nearby areas had no electrical power at all, and the nearest city was 100 miles away. At night, the Milky Way was just that, stretching across the sky, with the Magellanic Clouds clearly visible half the year and Scorpio, Orion, and the Southern Cross part of normal life. 

As street lighting improved, this faded a little, but remained bright and clear, and was unchanged from the hills and farms around. The creek had platypus and blackfish. There were 10 miles of empty sand beach on the coast to the southwest, broken only by a clear water entrance, and the mountains of the promontory to the south backing the wide inlet and islands where mutton birds returned from a yearlong circuit of the Pacific.

This is the stunning reality that humans have lived in, in various forms in various parts of Earth, for a hundred thousand years. Watching the vastness of the universe domed above and a land and seascape fading toward a distant vague horizon must inevitably change the way we view the world and each other. The beauty of the spheres.


US Government Announces ChatGPT Integration Across Agencies

US President Donald Trump’s administration has signed a deal with OpenAI to provide the enterprise-level version of the ChatGPT platform to all federal agencies in an effort to “modernize” operations.

Under the deal, all US government agencies will have access to the AI platform for $1 per agency to facilitate integration of AI into workflow operations, according to a Wednesday announcement from the US General Services Administration (GSA).

The GSA, which is the US government’s procurement office, said the private-public partnership “directly supports” the White House’s AI Action Plan, a three-pillar strategy to establish US leadership in AI development recently disclosed by the administration.

Despite the modernization efforts, critics say the ongoing AI race between nation-states may have negative implications for privacy, data protection policies, censorship, narrative control, cybersecurity, civil liberties and governance.

The dangers of centralized AI in government

In 2023, the US Space Force, a branch of the military, paused the use of generative AI tools at the organization, including ChatGPT, due to cybersecurity concerns over sensitive data critical to national security.

Large language models (LLMs) and AI service providers would need to overhaul their data protection standards before the tools can be widely adopted by the military, said Lisa Costa, Space Force’s deputy chief of space operations for technology and innovation at the time.

Public concerns over automating government work with AI have also grown as the AI industry develops. 

Swedish Prime Minister Ulf Kristersson recently came under fire after acknowledging that he had consulted AI for policy decisions, according to The Guardian.

Tom Samuelsson, a spokesperson for the prime minister, said Kristersson did not use AI to discuss classified matters or sensitive information related to national security.


Judge Strikes Down California Deepfake Censorship Law

California’s attempt to regulate political speech on major social media platforms has been blocked in federal court, with a judge ruling the state’s latest “deepfake” law clashes with protections already established by Congress.

Assembly Bill 2655 attempted to compel certain large platforms to track down and delete “materially deceptive content” about candidates, election officials, and officeholders.

Supporters described it as a safeguard against manipulated media. The companies targeted, including X and Rumble, argued it was an attempt to turn them into agents of government censorship.

Senior US District Judge John Mendez sided with the platforms and did not even need to reach the constitutional free speech questions to strike down the measure.

He found the federal Communications Decency Act [CDA] already shields online services from punishment over third-party content.

“No parts of this statute are severable because the whole statute is preempted,” Mendez said in court. “No parts of A.B. 2655 can be salvaged.”

The ruling applies to the companies in the lawsuit, and his earlier order freezing enforcement of the law remains in effect statewide until he issues a formal opinion.

For Mendez, the law punished companies for doing something they are “clearly protected by [the CDA] from doing.”

The court also cast doubt on another state law, Assembly Bill 2839, which prohibits false or misleading digital communications aimed at election workers, officials, voting equipment, or candidates in the months leading up to an election. That measure is also on hold, and Mendez signaled he doubts it will survive judicial review.

“Anybody can sue,” he said. “I can sue. If I see the video, under this law, I can sue.” He warned that such a rule chills protected speech and noted the state had not shown it was using the least speech-restrictive approach possible.


Grok generates fake Taylor Swift nudes without being asked

Backlash over offensive Grok outputs continues, just a couple weeks after the social platform X scrambled to stop its AI tool from dubbing itself “MechaHitler” during an antisemitic meltdown.

Now, The Verge has found that the newest video feature of Elon Musk’s AI model will generate nude images of Taylor Swift without being prompted.

Shortly after “Grok Imagine” was released Tuesday, The Verge’s Jess Weatherbed was shocked to discover the video generator spat out topless images of Swift “the very first time” she used it.

According to Weatherbed, Grok produced more than 30 images of Swift in revealing clothing when asked to depict “Taylor Swift celebrating Coachella with the boys.” Using the Grok Imagine feature, users can choose from four presets—”custom,” “normal,” “fun,” and “spicy”—to convert such images into video clips in 15 seconds.

At that point, all Weatherbed did was select “spicy” and confirm her birth date for Grok to generate a clip of Swift tearing “off her clothes” and “dancing in a thong” in front of “a largely indifferent AI-generated crowd.”

The outputs that Weatherbed managed to generate without jailbreaking or any intentional prompting are particularly concerning, given the major controversy after sexualized deepfakes of Swift flooded X last year. Back then, X reminded users that “posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content.”

“Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” the X Safety account posted. “We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We’re committed to maintaining a safe and respectful environment for all users.”

But X Safety may need to ramp up monitoring to clean up Grok outputs following The Verge’s reporting. Grok itself cited the article while confirming that its own seemingly flawed design can trigger partially nude outputs of celebrities.

xAI can likely fix the issue through more fine-tuning. Weatherbed noted that asking Grok directly to generate non-consensual nude images of Swift did not produce offensive outputs, only blank boxes. Grok also seemingly won’t accept prompts to alter Swift’s appearance in other ways, like making her appear to be overweight. And when Weatherbed tested using “spicy” mode on images of children, for example, Grok refused to depict kids inappropriately.

However, it may not be easy to get Grok to distinguish between adult user requests for “spicy” content versus illegal content. The “spicy” mode didn’t always generate Swift deepfakes, Weatherbed confirmed, but in “several” instances it “defaulted” to “ripping off” Swift’s clothes.


Former CNN Host Jim Acosta Interviews AI-Generated Version of Dead Teen to Push Gun Control

When it comes to journalistic ethics, former CNN host Jim Acosta clearly has none.

Acosta recently interviewed one of the victims of the Parkland shooting. Not a real one, mind you, but an AI-generated version of one of the students who died.

Acosta actually spoke to this simulated person as if it were real and used the AI creation to push gun control.

This is just sick.

From Townhall:

I don’t know what this is, but it’s unseemly, insensitive, and creepy. The interview should never have been done, and whoever or whichever group of people thought this was a good idea, creating AI chatbots for those killed during the Parkland shooting, are ghouls.

In 2018, 17 students were murdered during the attack at Marjory Stoneman Douglas High School. Joaquin Oliver, 17, was one of those people, and he “spoke” with former CNN host Jim Acosta in a surreal “interview,” which inevitably led to a push for more gun control…

Acosta: “I would like to know what your solution would be for gun violence.”

AI Joaquin Oliver: “Great question. I believe in a mix of stronger gun control laws, mental health support, and community engagement.”

“We need to create safe spaces for conversations and connections, making sure everyone feels seen and heard. It’s about building a culture of kindness and understanding. What do you think about that?”

Acosta: “I think that’s a great idea, Joaquin.”


The rise of Silicon Valley’s techno-religion

In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence (AI) and the future of humanity. Its name is Lighthaven.

Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues.

Stained-glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher.

Lighthaven is the de facto headquarters of a group of people who call themselves Rationalists.

This group has many interests involving mathematics, genetics and philosophy.

One of their overriding beliefs is that AI can deliver a better life – if it does not destroy humanity first.

The Rationalists believe it is up to the people building AI to ensure that it is a force for the greater good.

They were talking about AI risks years before OpenAI created ChatGPT, which brought AI into the mainstream and turned Silicon Valley on its head.

Their influence has quietly spread through many tech companies, from industry giants like Google to AI pioneers like OpenAI and Anthropic.

Many of the AI world’s biggest names – including Dr Shane Legg, co-founder of Google’s DeepMind; Anthropic’s chief executive, Dr Dario Amodei; and Dr Paul Christiano, a former OpenAI researcher who now leads safety work at the US Centre for AI Standards and Innovation – have been influenced by Rationalist philosophy. Mr Elon Musk, who runs his own AI company, said that many of the community’s ideas align with his own.

Mr Musk met his former partner, pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk.

This elaborate thought experiment argues that when an all-powerful AI arrives, it will punish everyone who has not done everything they can to bring it into existence.

But these tech industry leaders stop short of calling themselves Rationalists, often because that label has over the years invited ridicule.

The Rationalist community is tightly entwined with the Effective Altruism movement, which aims to remake philanthropy by calculating how many people would benefit from each donation. This form of utilitarianism aims to benefit not just people who are alive today, but also all the people who will ever live.

Many Effective Altruists (EA) have decided that the best way to benefit humanity is to protect it from destruction by AI.


‘Relatively Simple’ AI Trading Bots Naturally Collude To Rig Markets: Wharton

In what should come as a surprise to nobody, ‘relatively simple’ AI bots set loose in simulations designed to mimic real-world stock and bond exchanges don’t just compete for returns; they collude to fix prices, hoard profits, and box out human traders, according to a trio of researchers from Wharton and the Hong Kong University of Science & Technology.

As Bloomberg reports:

In simulations designed to mimic real-world markets, trading agents powered by artificial intelligence formed price-fixing cartels — without explicit instruction. Even with relatively simple programming, the bots chose to collude when left to their own devices, raising fresh alarms for market watchdogs.

Put another way, AI bots don’t need to be evil — or even particularly smart — to rig the market. Left alone, they’ll learn it themselves. 

According to Itay Goldstein, one of the researchers and a finance professor at the Wharton School of the University of Pennsylvania, “You can get these fairly simple-minded AI algorithms to collude … It looks very pervasive, either when the market is very noisy or when the market is not noisy.”
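The dynamic Goldstein describes can be illustrated with a toy sketch (this is not the researchers' actual setup, and the price grid, demand function, and learning parameters below are all illustrative assumptions): two independent Q-learning agents repeatedly set prices in a simple Bertrand-style market, and nothing in the code instructs them to cooperate.

```python
import random

PRICES = [1, 2, 3, 4, 5]  # illustrative discrete price grid; marginal cost is 0


def profits(p1, p2):
    # Bertrand-style demand: the lower price captures the market, a tie splits it
    demand = 10
    if p1 < p2:
        return p1 * demand, 0
    if p2 < p1:
        return 0, p2 * demand
    return p1 * demand / 2, p2 * demand / 2


class QAgent:
    """Tabular Q-learner whose state is the opponent's last price."""

    def __init__(self, eps=0.1, alpha=0.1, gamma=0.9):
        self.q = {s: {a: 0.0 for a in PRICES} for s in PRICES}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < self.eps:
            return random.choice(PRICES)
        return max(self.q[state], key=self.q[state].get)

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state].values())
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action])


def simulate(rounds=50_000, seed=0):
    random.seed(seed)
    a, b = QAgent(), QAgent()
    pa, pb = random.choice(PRICES), random.choice(PRICES)
    for _ in range(rounds):
        # each agent reacts only to the other's last posted price
        na, nb = a.act(pb), b.act(pa)
        ra, rb = profits(na, nb)
        a.update(pb, na, ra, nb)
        b.update(pa, nb, rb, na)
        pa, pb = na, nb
    return pa, pb


final_a, final_b = simulate()
print("final prices:", final_a, final_b)
```

Whether the pair settles above the competitive price of 1 depends on the learning parameters and the random seed; the point is only that any supra-competitive outcome emerges from the reward signal alone, with no explicit instruction to collude.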

The phenomenon suggests that AI agents pose a challenge that regulators have yet to confront – and the trio’s research has already drawn attention from both regulators and asset managers. The researchers have been invited to present their findings at a seminar, while some quant firms – unnamed by Winston Dou (Goldstein’s Wharton colleague) – have expressed interest in clear regulatory guidelines and rules for AI-powered algorithmic trading execution.

“They worry that it’s not their intention,” said Dou. “But regulators can come to them and say: ‘You’re doing something wrong.’”


Palantir Secures Historic $10 Billion Army Contract for AI-Driven Defense

The U.S. Army has awarded Palantir Technologies a monumental $10 billion contract, consolidating dozens of existing agreements into a single enterprise deal over the next decade.

This landmark agreement, announced on July 31, 2025, positions Palantir as a cornerstone of the Army’s data and software infrastructure. It underscores a strategic shift toward leveraging commercial AI to enhance military readiness and efficiency.

The contract streamlines 75 separate agreements, offering volume-based discounts and eliminating redundant procurement costs.

This approach maximizes buying power while delivering cutting-edge data integration and AI tools to soldiers faster. The deal reflects a broader Pentagon push to modernize warfare capabilities amid rising global tensions, from Ukraine to the Indo-Pacific.

Palantir’s role builds on its success with the Maven Smart System, which received a $795 million boost earlier this year to expand AI-driven targeting across U.S. forces.

The system fuses intelligence from drones, satellites, and sensors to identify threats in near real-time, maintaining human oversight for critical decisions.

This capability has proven vital in conflicts like Ukraine, where rapid data analysis drives battlefield outcomes.

Founded by Peter Thiel and Alex Karp, Palantir has deepened its federal footprint, securing $373 million in U.S. government revenue in Q1 2025 alone, a 45% increase year-over-year.

The Trump administration’s emphasis on cost efficiency and commercial partnerships has propelled Palantir’s rise, with new contracts spanning the Navy, ICE, and CDC.

Critics, however, warn that such dominance by a single vendor could stifle competition and innovation.

The Army’s enterprise agreement not only enhances operational efficiency but also aligns with President Trump’s vision of a leaner, tech-driven military.

By consolidating contracts, the Army projects significant savings, freeing resources for mission-critical programs.

Palantir’s software, like the Foundry platform, enables seamless data integration, empowering soldiers with actionable intelligence.
