Anthropic has filed a federal lawsuit against the Pentagon, contesting its designation of the company as a supply chain risk. The legal action, lodged in a California court, claims the move constitutes unconstitutional retaliation for the company’s stance on AI usage.
Origins of the Conflict
Last month, Anthropic CEO Dario Amodei declared that the company’s AI models, including its Claude chatbot, cannot support mass surveillance of Americans or direct autonomous weapons systems. Defense Secretary Pete Hegseth and President Donald Trump rebuked Amodei for restricting government applications of the technology.
The administration responded swiftly by classifying Anthropic as a supply chain risk, effective immediately—a sanction typically imposed on firms from adversarial nations. This rare step reverberated through Silicon Valley, prompting a coalition of tech organizations to issue a public letter denouncing the decision. Even OpenAI CEO Sam Altman criticized the label as an overreach.
Apology and Legal Challenge
Amodei later apologized in a staff memo, emphasizing shared goals with the Department of Defense in bolstering U.S. national security through AI. He wrote, “Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government.”
Undeterred, Anthropic proceeded with the lawsuit. As Amodei stated in a blog post, “we do not believe this action is legally sound, and we see no choice but to challenge it in court.” The filing argues that government officials violated the Constitution by punishing protected speech. It states, “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” Anthropic seeks judicial intervention to protect its rights and end what it calls an unlawful retaliation campaign.
Expert Views and Ongoing Use
Legal experts predict a tough battle for Anthropic. Brett Johnson, a partner at Snell & Wilmer, noted that the government holds broad discretion over contract terms, limiting Anthropic’s prospects on appeal. He suggested the company’s strongest argument may be that it was selectively targeted among AI contractors.
Despite the designation, the Pentagon continues deploying Claude in U.S. operations against Iran, acknowledging use of what it deems compromised technology. Non-military agencies plan to comply with the directive and cease usage. A Microsoft spokesperson confirmed it will provide the chatbot to other government entities but not the Defense Department.
The lawsuit highlights immediate harms, including lost contracts worth hundreds of millions of dollars and chilled speech on AI’s role in warfare and surveillance. It argues that the government’s actions bypass Congress and are “as unlawful as they are unprecedented.”