A U.S. judge has issued a temporary injunction against the Pentagon’s blacklisting of Anthropic, a significant development in the company’s ongoing dispute with the military over AI safety in combat. In a lawsuit filed in federal court in California, Anthropic contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company a national security supply-chain risk without giving it an opportunity to challenge the designation, in violation of its rights under the First and Fifth Amendments.
U.S. District Judge Rita Lin, an appointee of former President Joe Biden, ruled in Anthropic’s favor but stayed the injunction for seven days to give the administration time to appeal. The legal battle began after Hegseth barred Anthropic from certain military contracts when the company objected to the military’s use of its AI chatbot, Claude, for surveillance or autonomous weapons; the blacklisting could cost Anthropic significant revenue and damage its reputation.
Anthropic argues that current AI models are not reliable enough to deploy safely in autonomous weapons, and that using them for domestic surveillance would violate civil liberties. The Pentagon counters that private companies should not dictate military operations, saying it seeks only lawful uses of the technology. Judge Lin was critical of the government’s actions, suggesting that Anthropic was penalized for criticizing the government’s contracting stance rather than for any genuine national security concern.
Anthropic spokesperson Danielle Cohen welcomed the ruling and emphasized the company’s commitment to working constructively with the government to ensure AI technologies are deployed safely and beneficially. The supply-chain-risk designation of Anthropic is unprecedented in the U.S., and the company’s legal challenges highlight the tensions among national security, technological innovation, and freedom of expression.
In a separate lawsuit pending in Washington, Anthropic challenges another Pentagon supply-chain risk designation, one that could affect its eligibility for civilian government contracts. In that case, the Justice Department contends that Anthropic’s refusal to lift restrictions on Claude could introduce uncertainty into Pentagon operations, potentially jeopardizing military systems during critical missions. The government maintains that the designation stemmed from contractual disagreements, not from Anthropic’s stance on AI safety.
