The Trump administration issued a directive on Friday to halt the use of Anthropic’s artificial intelligence technology across all U.S. agencies and imposed significant penalties, marking a highly publicized clash between the government and the company regarding AI safety concerns.
President Donald Trump, Defense Secretary Pete Hegseth and other officials criticized Anthropic on social media for refusing to grant the military unrestricted access to its AI technology by a specified deadline. They accused the company of jeopardizing national security, pointing to CEO Dario Amodei’s refusal to comply with the government’s demands over fears that its products could be used in ways that violate the company’s safeguards.
Trump expressed his disapproval of Anthropic, stating, “We don’t need it, we don’t want it, and will not do business with them again!” Hegseth also labeled the company a “supply chain risk,” a classification typically reserved for foreign adversaries and one that could disrupt the company’s partnerships with other businesses.
In response, Anthropic contested the supply chain risk designation, calling it an unprecedented action historically reserved for U.S. adversaries rather than American companies. The company argued that the designation was legally unsound and would set a dangerous precedent for any American company doing business with the government.
Anthropic had sought specific assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or deployed in fully autonomous weapons. While the Pentagon indicated it would use the technology only lawfully, it insisted on unrestricted access, leading to the impasse.
The government’s attempt to assert control over the company’s internal decisions reflects a broader dispute concerning AI’s role in national security and the potential risks posed by advanced machines in scenarios involving lethal force, sensitive data, and government surveillance.
Following publicized discussions this week, Anthropic rejected the government’s contract language, citing concerns that the new terms would allow its safeguards to be disregarded. Although the company has the financial strength to withstand the contract’s termination, the government’s actions pose wider risks as Anthropic rapidly ascends from research lab to high-value startup.
President Trump’s decision was preceded by criticisms from top Pentagon and State Department officials on social media, highlighting contradictions in their complaints. Hegseth emphasized the necessity for the Pentagon to have unrestricted access to Anthropic’s models for lawful defense purposes.
The administration granted a six-month phase-out period for the military’s use of Anthropic’s technology embedded in military platforms. Additionally, Trump warned of potential civil and criminal consequences if the company did not comply during the transition period.
The move has garnered attention from industry stakeholders and competitors, with Elon Musk supporting the administration’s stance while expressing concerns about Anthropic’s actions. OpenAI’s CEO, Sam Altman, backed Anthropic and questioned the Pentagon’s approach, emphasizing shared safety principles.
Amodei, a former OpenAI employee, founded Anthropic in 2021, and Altman said he trusted the company’s commitment to safety. Retired Air Force Gen. Jack Shanahan criticized the government’s actions, noting the widespread use of AI models like Claude and the need for caution in deploying such technology in national security contexts.
