Military Pursues AI Systems To Suppress Online Dissent Abroad

The U.S. military wants artificial intelligence to do what human propagandists cannot: create and spread influence campaigns at internet speed while systematically suppressing opposition voices abroad, according to internal Pentagon documents obtained by The Intercept.

The classified wishlist reveals SOCOM’s ambition to deploy “agentic AI or multi-LLM agent systems” that can “influence foreign target audiences” and “suppress dissenting arguments” with minimal human oversight. The military branch seeks contractors who can provide automated systems that operate at unprecedented scale and speed.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document said.

“Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

As reported by The Intercept, the proposed AI systems would extend far beyond simple content generation. SOCOM envisions technology that can “scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives.” More controversially, the systems would “suppress dissenting arguments” and “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages.”

The Pentagon plans to use these capabilities for comprehensive social manipulation, creating “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The systems would generate targeted messaging designed to “influence that specific individual or group” based on gathered intelligence.

SOCOM spokesperson Dan Lessard reportedly defended the initiative, declaring that “all AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making.”

The Pentagon’s move comes as adversaries deploy similar technology. Chinese firm GoLaxy has developed AI systems that can “reshape and influence public opinion on behalf of the Chinese government,” according to recent reporting by The New York Times. The company has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

Keep reading

Trump gives green light for $2m ICE deal with notorious Israeli spyware company

The Trump administration appears to have unfrozen a stalled $2 million Biden-era contract with Paragon Solutions (US) Inc., a spyware company founded in Israel whose products have been accused of facilitating the surveillance of journalists and activists.

On Saturday, a public procurement database showed that a stop work order on the September 2024 deal with U.S. Immigration and Customs Enforcement had been lifted, technology journalist Jack Poulson reported on his All-Source Intelligence Substack.

The deal does not specify what ICE will be getting as part of the deal, beyond describing an agreement for a “fully configured proprietary solution including license, hardware, warranty, maintenance, and training.”

An individual who answered a phone number listed for Paragon on the contract declined to comment.

Keep reading

More Age Verification Fallout: Artist Blogs Blocked, Porn Data Leaked, Traffic Boosts for Noncompliant Sites

As more places around the world—including U.S. states—pass laws requiring age checks around the internet, we’re continuing to see a slew of unintended (but entirely predictable) consequences. The latest round includes some U.S. residents being blocked from a blogging platform, French folks in dangers of their porn viewing habits being leaked, and porn websites that violate the law in the U.K. being rewarded with big boosts in web traffic.

Let’s start closest to home.

Another website is blocking access to Mississippi residents in response to the state’s age verification and online harm prevention law taking effect.

We’ve already seen some fallout from this law, including the social media platform Bluesky beginning to block Mississippi residents.

Now, Dreamwidth Studios—a blogging platform meant for artists (and one of the parties represented by tech trade group NetChoice in a challenge to the Mississippi law)—is also blocking access for people in Mississippi, as well as preventing minors in Tennessee from opening new accounts.

“People whose IP addresses geolocate to Mississippi will only be able to access a page that explains the issue and lets them know that we’ll be back to offer them service as soon as the legal risk to us is less existential,” Dreamwidth says on its website.

The company announced its new Mississippi policy on August 26, saying, “Mississippi residents, we are so, so sorry. We really don’t want to do this.” But “the Mississippi law is a breathtaking state overreach: it forces us to verify the identity and age of every person who accesses Dreamwidth from the state of Mississippi and determine who’s under the age of 18 by collecting identity documents, to save that highly personal and sensitive information, and then to obtain a permission slip from those users’ parents to allow them to finish creating an account.””

Dreamwidth goes on:

[The Mississippi law] also forces us to change our moderation policies and stop anyone under 18 from accessing a wide variety of legal and beneficial speech because the state of Mississippi doesn’t like it — which, given the way Dreamwidth works, would mean blocking people from talking about those things at all. (And if you think you know exactly what kind of content the state of Mississippi doesn’t like, you’re absolutely right.)

Needless to say, we don’t want to do that, either. Even if we wanted to, though, we can’t: the resources it would take for us to build the systems that would let us do it are well beyond our capacity.”

Mississippi users of Dreamwidth aren’t the only ones with restricted access. The platform will also “prevent any new account signups from anyone under 18 in Tennessee to protect ourselves against risk,” it said. “The judge in our challenge to Tennessee’s social media age verification, parental consent, and parental surveillance law (which we are also part of the fight against!) ruled last month that we had not met the threshold for a temporary injunction preventing the state from enforcing the law while the court case proceeds,” Dreamwidth posted. “The Tennessee law is less onerous than the Mississippi law and the fines for violating it are slightly less ruinous (slightly), but it’s still a risk to us.”

Dreamwidth’s moves further highlight how age verification laws like the ones enacted by Mississippi and Tennessee will come down harder on small and niche platforms than on big tech companies.

Keep reading

Comprehensive data privacy laws go into effect in 8 more states this year

This year, comprehensive privacy laws are going into effect in eight states to regulate how businesses handle digital information and to give consumers more protections over their personal data.

The laws in DelawareIowaMinnesotaNebraskaNew HampshireNew Jersey and Tennessee have taken effect already this year, according to a database from the International Association of Privacy Professionals’ Westin Research Center. Maryland’s privacy law, signed by Democratic Gov. Wes Moore last year, will go into effect Oct. 1.

Privacy laws enacted in IndianaKentucky and Rhode Island will go into effect next year.

Several other states are considering comprehensive privacy bills during this year’s legislative sessions. They include MassachusettsMichiganNorth CarolinaPennsylvania and Wisconsin.

When a person visits a website, applies to a job or logs into an online portal, they may be sharing their personal information. Comprehensive privacy laws can apply to a wide range of companies that participate in this kind of data collection.

These laws generally include two types of provisions — those related to consumer rights and those that establish business obligations, according to the association.

Under each of the new laws, consumers have the right to control when and how their data is collected and shared. Some of those provisions include the right to delete data from a company’s database, the ability to opt out of sharing sensitive information and the right to prohibit a company from selling their data.

The new measures also require businesses to ask consumers if they want to opt in to data collection. In some states, businesses are required to complete consumer data risk assessments and identify ways in which discrimination could take place. Some companies also may be required to limit to how consumer data is processed.

Keep reading

Michigan’s Dirty SOS Jocelyn Benson Gives Green Light To ONLINE VOTING With NO Legislative Approval

On September 22, 2024, we warned of a shocking plan by Democrats to steal the upcoming general election.

In the 2000 presidential election, a mere 629 votes separated the race between George W. Bush and Al Gore in Florida. When the absentee overseas votes finally arrived, George W. Bush was able to take the 537-vote lead from Al Gore and win the presidency.

On August 12, 2024, the DNC released a memo announcing that it would spend six figures to collect up to 9 million Democratic votes from overseas.

The DNC’s stated goal of winning the votes of approximately 9 million Americans through its Democrats Abroad website seems impossible, given that, according to a recent report by the federal government, only 4.4 million US citizens reside overseas, and only 2.8 million of those are of voting age.

Watch this stunning montage of Democrats working with their allies in the media to push for millions of NON-EXISTENT overseas “voters” to miraculously cast millions of votes in the 2024 election.

Despite the fact that military members only make up a small fraction of all UOCAVA voters, Democrats and the media continue to identify all UOCAVA voters as “military voters” as a way to gaslight the general public into believing that Republicans who want to secure the UOCAVA voting system are trying to disenfranchise military voters.

Keep reading

Australia’s Senate Orders Release of eSafety Censorship Emails

The Australian Senate has formally ordered the production of all communications between “eSafety” Commissioner Julie Inman Grant and the Global Alliance for Responsible Media (GARM), adding to the scrutiny over the Commissioner’s role in transnational efforts to stifle online political speech.

While the contents of the emails had already come to light through a US House Judiciary Committee investigation, the Senate’s move signals a significant shift, one aimed squarely at holding a senior Australian bureaucrat accountable for her coordination with a foreign activist group pushing to censor views, including those of US President Donald Trump.

Senator Alex Antic, who introduced the motion, confirmed its passage on Wednesday afternoon, posting: “The Senate has voted in favour of my order for production of documents relating to communications between the Office of the eSafety Commissioner and the Global Alliance for Responsible Media.”

Keep reading

AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

Keep reading

Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill to allegedly protect children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain categories. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1986), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the protecting children argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harm. While the platform would technically be the entity implementing these policies, the government would be driving content removal.

Keep reading

The Loneliness Epidemic Isn’t About Phones, It’s About Algorithms

America’s loneliness epidemic has been headline news for years. We’ve seen study after study confirming what many feel in their bones: more people are isolated, disconnected, and struggling to find meaning in daily life.

Older Americans often chalk this up to technology or to the social scars of COVID. They aren’t entirely wrong, but the deeper story is much larger.

The real driver of this new loneliness is algorithms—the invisible rules and processes that now govern how we live, connect, and even think.

This may sound abstract, but it isn’t. Algorithms are the silent presence shaping your news feed, recommending your next purchase, deciding which job application gets reviewed, and filtering which posts you see from family or friends. They don’t just show you the world; they decide which world you see.

And the most important thing to understand is that algorithms have not touched every generation equally.

Baby boomers and many Gen Xers remember life before algorithms. They grew up with solitude as a normal part of existence: long walks, time alone with books, evenings without distraction. Their social lives were local and embodied. If they were lonely, it was the ordinary kind of loneliness, the kind that might drive someone to call a friend, join a club, or just take a walk and kick around some stones along the way.

Millennials came of age as algorithms entered their lives through the rise of social media and smartphones. For them, the shift was gradual. They still remember analog childhoods, but their adult lives became increasingly tethered to devices. They learned to straddle both worlds, sometimes nostalgically recalling life before algorithms, but never recognizing algorithms as the new driving force in their lives.

Gen Z and Gen Alpha, however, have never known life without algorithmic curation. From childhood, their identities, friendships, and even their sense of self have been shaped inside systems designed to maximize engagement.

They are the most connected generation in history and yet, paradoxically, the loneliest. Studies confirm that they report higher levels of isolation and depression than their parents or grandparents did at the same age. For them, solitude is almost unimaginable. Their sleeping hours have diminished, and their waking hours have been saturated with algorithmic nudges, performance demands, and invisible comparisons.

This is why blaming “phones” or “tech” misses the point. A phone is just a tool. The deeper cause of today’s epidemic of loneliness is the system of algorithms that runs on those devices and quietly governs the lives lived through them.

Keep reading

4chan and Kiwi Farms Sue UK Regulator Ofcom Over Online Censorship Law, Citing First Amendment Violations

Two of the internet’s most free-speech supporting platforms, 4chan and Kiwi Farms, are taking their fight for online free speech to court, targeting the UK’s communications regulator, Ofcom, for what they describe as an unconstitutional attempt to enforce British censorship laws on American websites.

In a lawsuit filed in the US District Court for the District of Columbia, the plaintiffs argue that the UK’s controversial Online Safety Act is not only an unlawful extraterritorial power grab but a direct attack on foundational American liberties.

Read the complaint here.

The suit calls Ofcom’s enforcement tactics a clear violation of the First Amendment and a dangerous attempt to establish global jurisdiction over online speech.

The complaint lays out how the UK’s censorship regime is being pushed onto American soil, despite the fact that both platforms operate entirely within the United States and are in full compliance with US law.

“Parliament does not have that authority. That issue was settled, decisively, 243 years ago in a war that the UK’s armies lost and are not in any position to relitigate,” Kiwi Farms stated bluntly in a letter responding to Ofcom’s demands.

Ofcom, under the new Online Safety Act, is demanding that platforms like 4chan and Kiwi Farms conduct written “risk assessments,” install content moderation systems, remove speech deemed “illegal” by UK standards, and verify the identities of their users.

The platforms face criminal penalties and steep fines of up to £18 million ($24M) or 10% of their global revenue if they refuse.

The plaintiffs argue these demands are not only legally unenforceable but blatantly unconstitutional. “Where Americans are concerned, the Online Safety Act purports to legislate the Constitution out of existence,” the lawsuit states.

Central to the challenge is the claim that Ofcom, a British corporate regulator funded by the very companies it polices, is attempting to impose UK-style speech control on a global scale.

According to the complaint, Ofcom has no lawful authority to regulate US platforms, let alone to compel speech or force the removal of content that is protected under the US Constitution.

The filing asserts that Ofcom’s threats of imprisonment and massive fines, coupled with demands for speech censorship and compelled disclosure of sensitive company information, constitute “egregious violations of Americans’ civil rights.”

The UK regulator has already targeted both platforms with a series of legal notices and threats, despite lacking jurisdiction or proper legal process.

These include multiple emails and letters declaring 4chan and Kiwi Farms in breach of UK law, none of which were served under the required UK-US Mutual Legal Assistance Treaty.

The plaintiffs argue that these attempts at enforcement are not just improper, but “repugnant to United States public policy.”

“Ofcom purports to regulate content and interactions on platforms and services with which Plaintiffs’ users are voluntarily interacting,” the complaint says. “Ofcom seeks to control those interactions in order to satisfy the whims of Ofcom employees or the UK law enforcement or political apparatuses.”

Notably, both platforms have limited or no access for UK users in response to the threats. Kiwi Farms, for instance, blocked UK IPs entirely after receiving what it interpreted as an impending Section 100 order demanding compliance.

The lawsuit requests the court to block Ofcom from issuing further demands without going through proper international legal channels and to declare the Online Safety Act’s enforcement efforts unenforceable in the United States.

It also seeks a permanent injunction against any future attempts by Ofcom to impose UK regulations on the plaintiffs.

The case stands as a direct confrontation between two visions of the internet: one based on the US constitutional tradition of free speech and open access, and another that embraces government-mandated safety regimes that can be weaponized to silence speech on a global scale.

For the plaintiffs, the message is clear: they will not yield to foreign censors. As the suit puts it, “Delaware and West Virginia are not part of the UK. Their citizens, both natural and corporate, do not answer to the UK.”

Preston Byrne of Byrne & Storm, P.C., who represents the plaintiffs, told Reclaim The Net the platforms are refusing to comply with Ofcom’s demands because “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”

He praised the decision by 4chan and Kiwi Farms to stand firm against the foreign regulator, stating, “In the face of these foreign demands, our clients have bravely chosen to assert their constitutional rights.”

Byrne characterized the UK’s censorship law as a calculated attack on the American tech sector, warning that “the UK Online Safety Act is a brazen attempt by a foreign country to hobble American competitiveness and suffocate American freedom by exporting the UK’s censorship laws to our shores.”

He made it clear that the legal team would not allow such interference to go unanswered: “The First Amendment bar is prepared to hale any foreign censor into federal court at any time to defend any American.”

In a statement to Reclaim The Net, Ronald Coleman of the Coleman Law Firm, P.C., co-counsel in the suit, framed the case as a broader defense of national sovereignty and individual liberty.

Keep reading