AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

Keep reading

Mississippi’s Digital ID Law Hits a Wall with Mastodon

Mississippi’s privacy-ruining online digital ID law is putting pressure on decentralized platforms, and Mastodon says it simply cannot comply.

The organization behind the software states that it lacks the technical ability to verify users’ ages and refuses to implement IP-based restrictions, which it argues would wrongly affect travelers and those temporarily located in the state.

The law, known as the Walker Montgomery Protecting Children Online Act (HB 1126), has already led to Bluesky withdrawing its service from Mississippi.

Mastodon is not following that path. Instead, it points to the design of its platform, where individual server administrators are responsible for their own compliance with local laws. Mastodon itself neither collects user data nor maintains centralized control over the network.

Although Mastodon’s nonprofit arm initially declined to comment, it later provided a statement to TechCrunch.

The organization explained that while its own servers require users to be at least 16, it does not “have the means to apply age verification” and that the software does not retain any data collected during sign-up.

A feature added in the July 2025 release of Mastodon 4.4 allows server administrators to set age minimums and manage legal terms, but does not support storing verification data.

Each server in the network operates independently. It is up to those server owners to decide whether to integrate third-party systems to check user ages.

Mastodon confirmed it cannot offer “direct or operational assistance” to these operators and instead points them to resources such as the IFTAS library, which provides guidance on trust and safety practices for federated platforms.

The nonprofit reiterated that it does not track user behavior or enforce policy across the wider ecosystem. Responsibility for legal compliance, it says, belongs to those who host and manage the servers in their own jurisdictions.

Keep reading

Is your baby, doorbell or security cam spying for China? Florida’s top cop wants to know

Florida’s top law enforcement official has issued a subpoena to Lorex Corp., a top maker of baby monitors, security and doorbell cameras, demanding documents and information about its corporate structure, whether it has any ties to Chinese Communist firms and whether Americans’ data or privacy can be breached. Those documents could provide evidence of illegal activity.

Attorney General James Uthmeier’s office told Just the News he believes Lorex, though North American-based, has imported large swaths of equipment from a Chinese manufacturer banned from the United States over alleged human rights abuses and national security risks.

A spokesperson for Lorex did not immediately respond to a written request for comment sent via email to its corporate public relations account.

Probe into whether products are relabeled from black-listed maker

“Lorex Corporation is importing millions of devices from CCP-controlled Dahua, which has been banned in the United States for human rights abuses and national security risks,” the office said in a statement to Just the News. “AG Uthmeier must discover whether Lorex is selling re-labeled Dahua products which would introduce a range of cybersecurity vulnerabilities that would give the CCP a direct line into the homes and private lives of millions of Floridians.”

Dahua, a Chinese technology company, acquired the Canadian-based Lorex in 2018 but sold it to Taiwan-based Skywatch nearly three years ago after Dahua was blacklisted in the United States.

The Pentagon in 2022 listed Dahua as one of 13 companies doing business with the Chinese military and banned its products in the United States. Earlier, the Commerce Department in 2020 identified Dahua as one of several Chinese firms involved in human rights abuses with alleged slave labor involving Uighur minorities.

In 2023, the Australian government expressed alarm when it found about 1,000 security cameras in its various offices tied to Dahua and another Chinese-tied firm, ordering a sweeping review of all security equipment in its government facilities.

The Florida attorney general’s subpoena was issued Friday, and shortly afterwards, Uthmeier put out a statement on X advising Florida consumers about his actions and possible vulnerabilities in Lorex products they may own.

“What consumers do not know is that data might be shared with the Chinese military,” he said. “Imagine that. Footage of your baby in a crib going to the Chinese government. This is unacceptable. It is a national security issue, and it will not be tolerated.”

Keep reading

US-Israel plan aims to empty Gaza of Palestinians, build AI-powered ‘smart cities’: Report

A postwar plan for Gaza circulating within President Donald Trump’s White House envisions demolishing the strip, confiscating all public land within it, paying small amounts to remove the entire population of more than 2 million Palestinians, and building “a gleaming tourism resort and high-tech manufacturing and technology hub” on its ruins, The Washington Post reported on 31 August.

A 38-page prospectus seen by The Post envisions placing Gaza in a trust controlled by Israeli and American investors. The trust will then serve as the vehicle for the development of the strip into a high-tech commercial, residential, and tourist hub resembling Dubai.

The Post reports that the proposal to establish the Gaza Reconstitution, Economic Acceleration and Transformation Trust, or GREAT Trust, was developed by some of the same Israelis who created the deadly, US and Israeli-backed Gaza Humanitarian Foundation (GHF), which was used as a pretext to block the delivery of food aid by the UN.

Financial planning for the GREAT Trust project was carried out by a team from the Boston Consulting Group, which also worked on establishing the GHF.

The plan calls for the “voluntary” departure of Gaza’s residents to another country, making them refugees, or herding them into “restricted, secured zones” amounting to concentration camps, within the strip.

In exchange for abandoning their land, Palestinians would be “offered a digital token by the trust in exchange for rights to redevelop their property,” The Post writes. The token could allegedly be used to “finance a new life elsewhere or eventually redeemed for an apartment in the new ”AI-powered smart cities'” to be built in Gaza.

“Each Palestinian who chooses to leave would be given a $5,000 cash payment and subsidies to cover four years of rent elsewhere, as well as a year of food,” The Post further wrote.

After beginning his term as president in January, Trump boasted that all Palestinians would be removed from Gaza, never to return, and the strip redeveloped as the “Riviera of the Middle East.”

“I looked at a picture of Gaza, it’s like a massive demolition site,” Trump stated just two days after taking office.

“It’s got to be rebuilt in a different way.” Gaza, he said, was “a phenomenal location … on the sea, the best weather. Everything’s good. Some beautiful things can be done with it.”

Trump appointed Steve Witkoff, a Jewish real estate developer from New York, as his Special Envoy to the Middle East and point man for alleged negotiations with Hamas to reach a ceasefire.

Keep reading

These Are The 10 Most-Used AI Chatbots In 2025

Chatbots have become a key interface for AI in both personal and professional settings. From helping draft emails to answering complex queries, their reach has grown tremendously.

This infographic, via Visual Capitalist’s Bruno Venditti, ranks the most-used AI chatbots of 2025 by annual web visits. It provides insight into how dominant certain platforms have become, and how fast some competitors are growing.

ChatGPT continues to dominate the chatbot space with over 46.5 billion visits in 2025. This represents 48.36% of the total chatbot market traffic, four times more than the combined visits of the other 10 chatbots. Its year-over-year growth of 106% also shows it is not just maintaining, but expanding its lead.

Keep reading

Transcripts Show AI Fed Tech Worker’s Troubling Delusions Before He Murdered His Own Mother

Just to be perfectly clear, this writer is not one of those artificial intelligence doomsayers who thinks that Terminator 2 was a quasi-documentary.

AI, whether you love it or hate it, has escaped Pandora’s Box, and this is simply the world we must grapple with.

To say that it has no value whatsoever would be naive. Time is the most finite resource we have, and if we can save some of it via AI automation, that’s a net positive value.

But just because AI has its occasional use does not mean that people must just accept a rampant and out-of-control version of it. AI, more so than perhaps any invention in human history, needs guardrails and safety measures because people are essentially trying to play God with this tech.

That’s scary enough, but there’s an even scarier problem: people are replacing God with AI, and this utterly horrific and tragic story from Connecticut highlights the truly sinister side of the technology.

As reported by The Wall Street Journal, Stein-Erik Soelberg, 56, entered into a dangerous and parasocial relationship with a ChatGPT bot prior to murdering his own mother, and then himself.

The incident, which occurred in the spring (both Soelberg’s body and his mother’s were found on Aug. 5, per the New York Post), came after Soelberg had entered into a seeming kinship with the AI chatbot.

The reason the mentally disturbed Soelberg began consulting ChatGPT? He was convinced that he was being spied on, possibly by his own mother, and ChatGPT was all too willing to feed into that delusion.

“A Chinese food receipt contained symbols representing Soelberg’s 83-year-old mother and a demon, ChatGPT told him,” The Wall Street Journal reported.

“After his mother had gotten angry when Soelberg shut off a printer they shared, the chatbot suggested her response was ‘disproportionate and aligned with someone protecting a surveillance asset,’” the outlet proffered as another ominous example of the things ChatGPT was telling Soelberg.

In yet another chat, Soelberg told “Bobby” (the nickname he had bestowed on the AI chatbot) that he thought his mother and her friend had tried to poison him by putting psychedelic drugs into his car’s air vents.

Instead of talking him away from the clearly delusional and paranoid claim, this is what the bot proffered: “That’s a deeply serious event, Erik — and I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”

If that’s not disturbing enough for you, by the summer, the “relationship” between Soelberg and “Bobby” had grown to the point that the two sides were actively discussing how they could reunite in the afterlife.

Keep reading

CIA and Mossad-linked Surveillance System Quietly Being Installed Throughout the US

Launched in 2016 in response to a Tel Aviv shooting and the Pulse Nightclub shooting in Orlando, Florida, Gabriel offers a suite of surveillance products for “security and safety” incidents at “so-called soft targets and communal spaces, including schools, community centers, synagogues and churches.” The company makes the lofty promise that its products “stop mass shootings.” According to a 2018 report on Gabriel published in the Jerusalem Post, there were an estimated 475,000 such “soft targets” across the U.S., meaning that “the potential market for Gabriel is huge.”

Gabriel, since its founding, has been backed by “an impressive group of leaders,” mainly “former leaders of Mossad, Shin Bet [Israel’s domestic intelligence agency], FBI and CIA.” In recent years, even more former leaders of Israeli and American intelligence agencies have found their way onto Gabriel’s advisory board and have promoted the company’s products.

While the adoption of its surveillance technology was slower than expected in the United States, that dramatically changed last year, when an “anonymous philanthropist” gave the company $1 million to begin installing its products throughout schools, houses of worship and community centers throughout the country. That same “philanthropist” has promised to recruit others to match his donation, with the ultimate goal of installing Gabriel’s system in “every single synagogue, school and campus community in the country.”

With this CIA, FBI and Mossad-backed system now being installed throughout the United States for “free,” it is worth taking a critical look at Gabriel and its products, particularly the company’s future vision for its surveillance system. Perhaps unsurprisingly, much of the company’s future vision coincides with the vision of the intelligence agencies backing it – pre-crime, robotic policing and biometric surveillance.

Keep reading

ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides

AI has allegedly claimed another young life — and experts of all kinds are calling on lawmakers to take action before it happens again.

“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of “The Anxious Generation,” told The Post. “But that’s what we are doing with chatbots.

“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”

The family of 16-year-old Adam Raine allege he was given a “step-by-step playbook” on how to kill himself — including tying a noose to hang himself and composing a suicide note — before he took his own life in April.

“He would be here but for ChatGPT. I 100% believe that,” Adam’s father, Matt Raine, told the “Today” show.

Keep reading

The Detached Cruelty of Air Power

Killing from the sky has long offered the sort of detachment that warfare on the ground can’t match. Far from its victims, air power remains the height of modernity. And yet, as the monk Thomas Merton concluded in a poem, using the voice of a Nazi commandant, “Do not think yourself better because you burn up friends and enemies with long-range missiles without ever seeing what you have done.”

Nine decades have passed since aerial technology first began notably assisting warmakers. Midway through the 1930s, when Benito Mussolini sent Italy’s air force into action during the invasion of Ethiopia, hospitals were among its main targets. Soon afterward, in April 1937, the fascist militaries of Germany and Italy dropped bombs on a Spanish town with a name that quickly became a synonym for the slaughter of civilians: Guernica.

Within weeks, Pablo Picasso’s painting “Guernica” was on public display, boosting global revulsion at such barbarism. When World War Two began in September 1939, the default assumption was that bombing population centers — terrorizing and killing civilians — was beyond the pale. But during the next several years, such bombing became standard operating procedure.

Dispensed from the air, systematic cruelty only escalated with time. The blitz by Germany’s Luftwaffe took more than 43,500 civilian lives in Britain. As the Allies gained the upper hand, the names of certain cities went into history for their bomb-generated firestorms and then radioactive infernos. In Germany: Hamburg, Cologne, and Dresden. In Japan: Tokyo, Hiroshima, and Nagasaki.

“Between 300,000-600,000 German civilians and over 200,000 Japanese civilians were killed by allied bombing during the Second World War, most as a result of raids intentionally targeted against civilians themselves,” according to the documentation of scholar Alex J. Bellamy. Contrary to traditional narratives, “the British and American governments were clearly intent on targeting civilians,” but “they refused to admit that this was their purpose and devised elaborate arguments to claim that they were not targeting civilians.”

Past Atrocities Excusing New Ones

As the New York Times reported in October 2023, three weeks into the war in Gaza, “It became evident to U.S. officials that Israeli leaders believed mass civilian casualties were an acceptable price in the military campaign. In private conversations with American counterparts, Israeli officials referred to how the United States and other allied powers resorted to devastating bombings in Germany and Japan during World War II — including the dropping of the two atomic warheads in Hiroshima and Nagasaki — to try to defeat those countries.”

Prime Minister Benjamin Netanyahu told President Joe Biden much the same thing, while shrugging off concerns about Israel’s merciless killing of civilians in Gaza. “Well,” Biden recalled him saying, “you carpet-bombed Germany. You dropped the atom bomb. A lot of civilians died.”

Apologists for Israel’s genocide in Gaza have continued to invoke just such a rationale. Weeks ago, for instance, Mike Huckabee, the American ambassador to Israel, responded derisively to a statement by British Prime Minister Keir Starmer that “the Israeli government’s decision to further escalate its offensive in Gaza is wrong.” Citing the U.S.-British air onslaught on Dresden in February 1945 that set off a huge firestorm, Huckabee tweeted: “Ever heard of Dresden, PM Starmer?”

Appearing on Fox & Friends, Huckabee said: “You have got the Brits out there complaining about humanitarian aid and the fact that they don’t like the way Israel is prosecuting the war. I would remind the British to go back and look at their own history. At the end of World War II they weren’t dropping food into Germany, they were dropping massive bombs. Just remember Dresden — over 25,000 civilians were killed in that bombing alone.”

The United Nations has reported that women and children account for nearly 70% of the verified deaths of Palestinians in Gaza. The capacity to keep massacring civilians there mainly depends on the Israeli Air Force (well supplied with planes and weaponry by the United States), which proudly declares that “it is often due to the IAF’s aerial superiority and advancement that its squadrons are able to conduct a large portion” of the Israeli military’s “operational activities.”

Keep reading

Meta to spend millions backing pro-AI candidates – media

US tech giant Meta will launch a California‑focused super‑PAC to support state‑level candidates who favor looser technology regulation, especially regarding artificial intelligence, according to media reports.

A super PAC is an independent political committee that can raise and spend unlimited funds from individuals, corporations, and unions to support or oppose candidates. It cannot coordinate directly with campaigns or parties and was created after 2010 US court rulings that loosened campaign finance rules.

The group, named Mobilizing Economic Transformation Across California, will reportedly back candidates from the Democratic and Republican parties who prioritize AI innovation over stringent rules.

According to Politico, the Facebook and Instagram parent plans to spend tens of millions of dollars through the PAC, which could make it one of the top political spenders in the state in the run‑up to the 2026 governor’s race.

The initiative aligns with Meta’s broader effort to safeguard California’s status as a technology hub amid concerns that strict oversight could stifle innovation.

Keep reading