AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

Keep reading

Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill to allegedly protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain categories. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1986), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the protecting children argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harm. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Keep reading

The Loneliness Epidemic Isn’t About Phones, It’s About Algorithms

America’s loneliness epidemic has been headline news for years. We’ve seen study after study confirming what many feel in their bones: more people are isolated, disconnected, and struggling to find meaning in daily life.

Older Americans often chalk this up to technology or to the social scars of COVID. They aren’t entirely wrong, but the deeper story is much larger.

The real driver of this new loneliness is algorithms—the invisible rules and processes that now govern how we live, connect, and even think.

This may sound abstract, but it isn’t. Algorithms are the silent presence shaping your news feed, recommending your next purchase, deciding which job application gets reviewed, and filtering which posts you see from family or friends. They don’t just show you the world; they decide which world you see.

And the most important thing to understand is that algorithms have not touched every generation equally.

Baby boomers and many Gen Xers remember life before algorithms. They grew up with solitude as a normal part of existence: long walks, time alone with books, evenings without distraction. Their social lives were local and embodied. If they were lonely, it was the ordinary kind of loneliness, the kind that might drive someone to call a friend, join a club, or just take a walk and kick around some stones along the way.

Millennials came of age as algorithms entered their lives through the rise of social media and smartphones. For them, the shift was gradual. They still remember analog childhoods, but their adult lives became increasingly tethered to devices. They learned to straddle both worlds, sometimes nostalgically recalling life before algorithms, but never recognizing algorithms as the new driving force in their lives.

Gen Z and Gen Alpha, however, have never known life without algorithmic curation. From childhood, their identities, friendships, and even their sense of self have been shaped inside systems designed to maximize engagement.

They are the most connected generation in history and yet, paradoxically, the loneliest. Studies confirm that they report higher levels of isolation and depression than their parents or grandparents did at the same age. For them, solitude is almost unimaginable. Their sleeping hours have diminished, and their waking hours have been saturated with algorithmic nudges, performance demands, and invisible comparisons.

This is why blaming “phones” or “tech” misses the point. A phone is just a tool. The deeper cause of today’s epidemic of loneliness is the system of algorithms that runs on those devices and quietly governs the lives lived through them.

Keep reading

4chan and Kiwi Farms Sue UK Regulator Ofcom Over Online Censorship Law, Citing First Amendment Violations

Two of the internet’s most free-speech supporting platforms, 4chan and Kiwi Farms, are taking their fight for online free speech to court, targeting the UK’s communications regulator, Ofcom, for what they describe as an unconstitutional attempt to enforce British censorship laws on American websites.

In a lawsuit filed in the US District Court for the District of Columbia, the plaintiffs argue that the UK’s controversial Online Safety Act is not only an unlawful extraterritorial power grab but a direct attack on foundational American liberties.

Read the complaint here.

The suit calls Ofcom’s enforcement tactics a clear violation of the First Amendment and a dangerous attempt to establish global jurisdiction over online speech.

The complaint lays out how the UK’s censorship regime is being pushed onto American soil, despite the fact that both platforms operate entirely within the United States and are in full compliance with US law.

“Parliament does not have that authority. That issue was settled, decisively, 243 years ago in a war that the UK’s armies lost and are not in any position to relitigate,” Kiwi Farms stated bluntly in a letter responding to Ofcom’s demands.

Ofcom, under the new Online Safety Act, is demanding that platforms like 4chan and Kiwi Farms conduct written “risk assessments,” install content moderation systems, remove speech deemed “illegal” by UK standards, and verify the identities of their users.

The platforms face criminal penalties and steep fines of up to £18 million ($24M) or 10% of their global revenue if they refuse.

The plaintiffs argue these demands are not only legally unenforceable but blatantly unconstitutional. “Where Americans are concerned, the Online Safety Act purports to legislate the Constitution out of existence,” the lawsuit states.

Central to the challenge is the claim that Ofcom, a British corporate regulator funded by the very companies it polices, is attempting to impose UK-style speech control on a global scale.

According to the complaint, Ofcom has no lawful authority to regulate US platforms, let alone to compel speech or force the removal of content that is protected under the US Constitution.

The filing asserts that Ofcom’s threats of imprisonment and massive fines, coupled with demands for speech censorship and compelled disclosure of sensitive company information, constitute “egregious violations of Americans’ civil rights.”

The UK regulator has already targeted both platforms with a series of legal notices and threats, despite lacking jurisdiction or proper legal process.

These include multiple emails and letters declaring 4chan and Kiwi Farms in breach of UK law, none of which were served under the required UK-US Mutual Legal Assistance Treaty.

The plaintiffs argue that these attempts at enforcement are not just improper, but “repugnant to United States public policy.”

“Ofcom purports to regulate content and interactions on platforms and services with which Plaintiffs’ users are voluntarily interacting,” the complaint says. “Ofcom seeks to control those interactions in order to satisfy the whims of Ofcom employees or the UK law enforcement or political apparatuses.”

Notably, both platforms have limited or no access for UK users in response to the threats. Kiwi Farms, for instance, blocked UK IPs entirely after receiving what it interpreted as an impending Section 100 order demanding compliance.

The lawsuit requests the court to block Ofcom from issuing further demands without going through proper international legal channels and to declare the Online Safety Act’s enforcement efforts unenforceable in the United States.

It also seeks a permanent injunction against any future attempts by Ofcom to impose UK regulations on the plaintiffs.

The case stands as a direct confrontation between two visions of the internet: one based on the US constitutional tradition of free speech and open access, and another that embraces government-mandated safety regimes that can be weaponized to silence speech on a global scale.

For the plaintiffs, the message is clear: they will not yield to foreign censors. As the suit puts it, “Delaware and West Virginia are not part of the UK. Their citizens, both natural and corporate, do not answer to the UK.”

Preston Byrne of Byrne & Storm, P.C., who represents the plaintiffs, told Reclaim The Net the platforms are refusing to comply with Ofcom’s demands because “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”

He praised the decision by 4chan and Kiwi Farms to stand firm against the foreign regulator, stating, “In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights.”

Byrne characterized the UK’s censorship law as a calculated attack on the American tech sector, warning that “the UK Online Safety Act is a brazen attempt by a foreign country to hobble American competitiveness and suffocate American freedom by exporting the UK’s censorship laws to our shores.”

He made it clear that the legal team would not allow such interference to go unanswered: “The First Amendment bar is prepared to hale any foreign censor into federal court at any time to defend any American.”

In a statement to Reclaim The Net, Ronald Coleman of the Coleman Law Firm, P.C., co-counsel in the suit, framed the case as a broader defense of national sovereignty and individual liberty.

Keep reading

Chrome VPN Extension With 100k Installs Screenshots All Sites Users Visit

FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, is exposed by researchers for taking screenshots of users’ screens and exfiltrating them to remote servers.

A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.

Koi Security’s forensic analysis showed that the surveillance mechanism is triggered automatically, within seconds of loading any web page. Using Chrome’s privileged chrome.tabs.captureVisibleTab() API, screenshots are silently taken in the background and bundled with metadata including page URLs, tab IDs, and unique user identifiers. This data is then transmitted to the attacker-controlled server aitd.one/brange.php, without user interaction or visible indication.

The spying behavior is powered by a two-stage architecture:

  1. A content script injected into every visited site using matches (http:///, https:///).
  2. A background service worker that listens for an internal captureViewport message and initiates the screenshot capture.

The extension also promotes an “AI Threat Detection” feature which, when clicked, captures another screenshot and sends it to aitd.one/analyze.php. However, the real issue lies in the fact that screenshots are being taken long before users ever interact with this feature, making the user interface a decoy.

Koi Security further explains that the latest version of the extension, v3.1.4, introduced AES-256-GCM encryption with RSA key wrapping to obfuscate the exfiltrated data, making it harder to detect or analyze with network monitoring tools.

Keep reading

California Content Law Design Code Faces Free Speech Clash

Efforts to implement California’s Age-Appropriate Design Code continue to face resistance from both the tech industry and digital civil liberties groups, who argue that the law’s restrictions violate constitutional protections and would compel sweeping surveillance and censorship online.

The Computer & Communications Industry Association (CCIA), which represents companies including Google, Amazon, Meta, and eBay, recently filed an amicus brief with the Ninth Circuit Court of Appeals in the case NetChoice v. Bonta.

Stephanie Joyce, the group’s senior vice president and director of its litigation center, condemned the legislation in blunt terms: “The Constitution prohibits the government from dictating what lawful content readers can see, and it extends that protection regardless of the reader’s age.

Though well-intentioned, California’s internet age restriction law is unconstitutional, and the court of appeals should affirm the decision to block it.”

The case marks the second time this legal clash has reached the Ninth Circuit. Previously, the court blocked only a portion of the law and returned the rest for further review.

Now, with renewed scrutiny, the court could determine whether the entire statute fails to withstand constitutional challenge.

NetChoice, an industry coalition that includes many of the same members as the CCIA, has led the charge against a wave of so-called “age assurance” laws.

These policies would require digital platforms to verify the ages of users and potentially restrict minors’ access to content deemed unsuitable. But free speech advocates warn the consequences would be broader and more dangerous than legislators admit.

Groups such as the Electronic Frontier Foundation (EFF) and the Center for Democracy & Technology (CDT) have also weighed in with their own amicus brief, arguing that the law’s age estimation mandates undermine essential First Amendment rights. “CDT and EFF’s brief argues that the appeals court should uphold the injunctions solely on the basis of its overbroad, unconstitutional age verification requirement because that requirement is not severable from other provisions and should doom the entire statute.” The brief warns that such mandates not only chill access to lawful speech but also erode online anonymity and place users’ personal data at risk.

They also emphasize that minors’ ability to engage freely online is a critical part of their development and civic participation. “Social media helps minors develop their own ideas, learn to express themselves, and engage productively with others in our democratic public sphere,” the brief states.

Keep reading

Citizen Lab Director Warns Cyber Industry About US Authoritarian Descent

Ron Deibert, the director of Citizen Lab, one of the most prominent organizations investigating government spyware abuses, is sounding the alarm to the cybersecurity community and asking them to step up and join the fight against authoritarianism. 

On Wednesday, Deibert will deliver a keynote at the Black Hat cybersecurity conference in Las Vegas, one of the largest gatherings of information security professionals of the year. 

Ahead of his talk, Deibert told TechCrunch that he plans to speak about what he describes as a “descent into a kind of fusion of tech and fascism,” and the role that the Big Tech platforms are playing, and “propelling forward a really frightening type of collective insecurity that isn’t typically addressed by this crowd, this community, as a cybersecurity problem.”

Deibert described the recent political events in the United States as a “dramatic descent into authoritarianism,” but one that the cybersecurity community can help defend against.

“I think alarm bells need to be rung for this community that, at the very least, they should be aware of what’s going on and hopefully they can not contribute to it, if not help reverse it,” Deibert told TechCrunch.

Historically, at least in the United States, the cybersecurity industry has put politics — to a certain extent — to the side. More recently, however, politics has fully entered the world of cybersecurity. 

Earlier this year, President Donald Trump ordered an investigation into former CISA director Chris Krebs, who had publicly rebuffed Trump’s false claims about election fraud by declaring the 2020 election secure. Trump later fired Krebs by tweet. The investigation ordered by Trump months after his 2024 reelection forced Krebs to step down from SentinelOne and vow to fight back.

In response, Jen Easterly, another former CISA director and Krebs’ successor, called on the cybersecurity community to get involved and speak out.

“If we stay silent when experienced, mission-driven leaders are sidelined or sanctioned, we risk something greater than discomfort; we risk diminishing the very institutions we are here to protect,” Easterly wrote in a post on LinkedIn. 

Easterly was herself a victim of political pressure from the Trump administration when her offer to join West Point was rescinded in late July.

Keep reading

What the CIA’s Covert Websites Were Hiding

The C.I.A. didn’t just infiltrate governments; it infiltrated the internet itself. For over a decade, Langley operated a sprawling network of covert websites that served as global spy terminals disguised as harmless blogs, news hubs, and fan pages.

Beginning in 2004, the C.I.A. established a vast network of at least 885 websites, ranging from Johnny Carson and Star Wars fan pages to online message boards about Rastafari. Spanning 29 languages and targeting at least 36 countries directly, these websites were aimed not only at adversaries such as China, Venezuela, and Russia, but also at allied nations, including France, Italy, and Spain, showing that the United States treats its friends much like its foes.

Soccer Blogs & Cracked Passwords

Gholamreza Hosseini is a former C.I.A. informant. In 2007, the Tehran-based industrial engineer contacted the agency and offered to pass them information about Iran’s nuclear energy program. His C.I.A. handlers showed him how to use IranianGoals.com to communicate with them.

Iranian Goals was a Farsi-language website that appeared to be dedicated to local soccer news. However, what appeared to be a search bar at the bottom of the home page was actually a password field. Typing the correct word into it would trigger a login process, revealing a secret messaging interface. Each informant had their own webpage, designed specifically for them, to insulate them from others in the network.

It seemed like an ingenious idea. However, Hosseini and the other spies were soon detected, thanks to some sloppy mistakes in Washington, D.C. An Iranian double agent revealed to the authorities their unique website, and some basic detective work led to the uncovering of the entire network.

The C.I.A. purchased the hosting space for dozens, perhaps hundreds, of these websites in bulk, often from the same internet providers, or the same server space. That meant that the IP addresses of these websites were consecutive, akin to housing each informant in adjacent properties on the same street.

Thus, if you looked at neighboring IP addresses, you would see similarly designed websites and could easily put two and two together. Even with some relatively basic online searches, Iranian authorities were able to identify dozens of C.I.A.-run websites. From there, they simply waited to see who would access them.

Keep reading

Senators Call For Probe Into Meta After News Report On AI Conversations With Children

Two Republican Senators on Aug. 14 called for a congressional investigation into Meta Platforms, Facebook’s parent company, after a recent news media report revealed an internal policy document that allowed the company’s chatbots to have “romantic or sensual” conversations with a child.

On Thursday, Reuters reported that it had viewed a Meta policy document detailing polices on chatbot behavior that permitted the technology to “engage a child in conversations that are romantic or sensual,” generate incorrect medical information, and assist users in arguing that black people are “dumber than white people.”

While Meta confirmed the authenticity of the document, the company said that after recently receiving questions from Reuters, it removed the portions stating that the chatbot is allowed to flirt or participate in romantic roleplay with children.

Andy Stone, a spokesperson for Meta, said the company is currently revising the documents and that those types of conversations with children should never have been permitted.

The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

On the X platform, Sen. Josh Hawley (R-Mo.) criticized the company for allegedly only making the changes after being questioned by Reuters.

So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” Hawley said. “This is grounds for an immediate congressional investigation.”

A spokesperson for Sen. Marsha Blackburn (R-Tenn.) said she supports a probe into Meta.

After Hawley called for an investigation, a Meta spokesperson reiterated the company’s previous statement. However, the spokesperson declined to comment on Hawley’s remarks.

Blackburn said the report underscores the need to pass reforms for better protection of children online, like the Kids Online Safety Act. The senator co-sponsored the bill, which passed in the Senate last year by a bipartisan 91–3 vote, but failed in the House.

The bill called for certain platforms, such as social media networks, to “take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying).”

Additionally, covered platforms must provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings,” the bill states.

Keep reading

The Fast-Approaching Digital Control Grid

Introduction

A digital control grid is an electronic network of digital telecommunication and information systems that allows individuals to be surveilled, tracked, and made subject to invasive controls applied to their financial transactions and resource use (such as electricity, food, water, transportation)—compromising, if not ending, all human rights and liberties. Control grids operate with significant data collection and AI to apply social credit systems that can be dictated on a highly centralized basis. A digital control grid ends financial freedom, replacing markets with technocracy—a system run by rules created and maintained centrally by “experts.”

Is the Trump Administration building a digital control grid? We provide the following checklist to assess the steps the Administration is (and is not) taking in a variety of areas to facilitate a rapid control grid build-out. We invite subscribers to post suggestions in the Comments section below.

The Big Picture

“Okay, let’s recap: REAL ID enforced; stablecoins incoming; mRNA Stargate project; TSA biometric overhaul; ICE using facial recognition; Palantir in 30+ federal agencies; Google/Amazon health data tracking; AI surveillance towers scanning highways. Surveillance State: engaged.”

Money

Summary: An all-digital currency and monetary system is essential to institute a digital control grid.

The GENIUS Act
There is support for legislation to create digital stablecoin infrastructure. Presumably, this can be used to create a programmable money system in both the U.S. and globally—in essence, a private CBDC.

More on the GENIUS Act (added July 18, 2025)
Exposing the Darkness Substack: Stablecoins “would likely eventually replace all cash, and would enable governments to freeze the accounts of anyone declared in violation of ‘lawful’ federal or state executive branch regulations, such as the vaccine mandates passed down in 2021 by [HHS]. Trump is doing the exact opposite of what he pledged…. He said he would ban CBDCs … but Stablecoins are in every important respect CBDCs.”

Armstrong Economics: “[E]ssentially, the government is turning the stablecoin into a digital dollar of sorts. The concern here is that this could delve into digitizing all currency and creating a CBDC. The act specifically provides the government with the authority to ‘block, freeze, and reject specific or impermissible transactions.’ This provision is not intended to protect the world against drug smugglers and thieves. This provision is intended to grant government unlimited control over how people spend stablecoins.”

Keep reading