Good news in the battle between the federal government and the AI company Anthropic: A federal judge has temporarily blocked the Department of Defense from declaring Anthropic a “supply chain risk,” which would have barred any federal agency or contractor from doing business with the company.
The government’s “conduct appears to be driven not by a desire to maintain operational control when using AI in the military but by a desire to make an example of Anthropic for its public stance on the weighty issues at stake in the contracting dispute,” wrote U.S. District Judge Rita Lin in an order granting Anthropic’s motion for preliminary injunction.
“Weighty issues” might undersell it. The supply chain risk designation—usually reserved for foreign companies—and President Donald Trump’s declaration that all federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology” came after Anthropic refused to remove contract language preventing the Pentagon from using its AI system, Claude, for autonomous weapons or mass domestic surveillance.
Rather than simply discontinue Anthropic’s contract, the Trump administration threw a massive public tantrum over not being able to use Claude for killer robots or new frontiers in the surveillance state. (Not that it wanted to do these things, the Pentagon insisted. It just needed these restrictions removed because…reasons.)
Anthropic sued, alleging a violation of its First Amendment rights.
In a March 26 order, Lin issued a preliminary injunction prohibiting the federal government "from implementing, applying, or enforcing in any manner" the president's directive and "any and all other agency actions taken in response to the Presidential Directive." Lin further blocked the Department of Defense and Defense Secretary Pete Hegseth from designating Anthropic a supply chain risk.
"It is the Department of War's prerogative to decide what AI product it uses," Lin notes in the order. "Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow 'all lawful uses' of its technology. That is not what this case is about. The question here is whether the government violated the law when it went further."
For now, Lin has concluded that there is strong evidence that it did. “This appears to be classic First Amendment retaliation,” she wrote.