In a February 27 post titled “You Should Have Moral Qualms about Anthropic’s Claims,” Hoover Institution senior fellow and foreign policy scholar Amy Zegart challenged the ethics of the AI company Anthropic. What I found refreshing is that a defense contractor’s CEO had a strong enough belief in his ethics that he was willing to forgo a lucrative contract. According to Zegart, I should have moral qualms about that. I don’t, and I’ll say why.
Anthropic had told the Department of War that it did not want its products used for either autonomous weapons or mass surveillance of Americans. According to Zegart, the Pentagon stated that it did not contemplate such uses. But that wasn’t enough for Dario Amodei, the CEO of Anthropic, who stated that he could not “in good conscience” accept the War Department’s assurances. Here’s Brendan Bordelon in a February 26 news item in Politico:
[Secretary of War] Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning — give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.
Hegseth threatened Anthropic with designating it as a risk to the supply chain. With that label, as noted above, companies that work with the government could be forbidden from partnering with Anthropic. Hegseth also threatened, though, to invoke the Defense Production Act to compel Anthropic to work with the Defense Department. A risk to the supply chain and, at the same time, a firm that Hegseth wants to use? Hmmm. Bordelon quotes Dean Ball, whom he identifies as a former AI advisor in the Trump administration, noting the obvious contradiction. Said Ball, “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models.”
Zegart cites the Politico article but doesn’t mention this contradiction. Instead, she goes after Anthropic and CEO Amodei. She writes:
There is a serious ethical question about whether one company, elected by nobody, with its own normative agenda as well as substantial global investors and customers, should be dictating the conditions of the most essential government role: protecting the lives of Americans.
But she misstates the issue. Anthropic isn’t trying to dictate the conditions of this essential government role. Anthropic is simply stating what its own limits are. The Pentagon is free to find another supplier and, indeed, has already done so: OpenAI has stepped up to take Anthropic’s place.
Moreover, why does Zegart think it’s important that Anthropic is elected by nobody? Does Zegart really think that companies that contemplate working with the Department of War should be elected by somebody?