Anthropic, the AI firm known for its Claude chatbot and its stated commitment to safe AI development, appears to be adjusting its safety priorities to stay competitive. The company recently announced changes to its responsible-scaling policy, a set of internal rules meant to prevent it from building AI capable of catastrophic harm, such as enabling large-scale cyberattacks.
According to the updated guidelines, Anthropic will still demand a “strong argument that catastrophic risk is contained” during AI development. However, it has now added a clause stating that development will pause only “until and unless we no longer believe we have a significant lead.” In other words, the company will resume development if it concludes that competitors are closing the gap.
Anthropic justified the move by arguing that AI safety concerns in the U.S. have taken a back seat to economic considerations. The company expressed disappointment with the slow pace of government action on AI safety, noting that policy focus has shifted toward AI competitiveness and economic growth rather than safety.
The change in safety protocols coincides with a Pentagon ultimatum: expand the permissible military applications of Anthropic's technology or risk losing government contracts. Anthropic maintains, however, that the policy shift is unrelated to the Pentagon dispute.
Founded in 2021 by former OpenAI employees, Anthropic has long emphasized its commitment to safety. CEO Dario Amodei has repeatedly stressed the importance of safe AI development, while also acknowledging the harm AI could cause if not handled responsibly.
The recent policy update also includes commitments to greater transparency and accountability, including plans to regularly publish safety reports and goals. Critics such as Heidy Khlaaf of the AI Now Institute, however, argue that Anthropic has historically prioritized hypothetical catastrophic risks over current, documented harms, such as the misuse of its Claude chatbot in fraud schemes.
As competition intensifies among top players such as Anthropic, OpenAI, and Google, so does the pressure to put economic interests ahead of safety. The U.S. administration's pro-AI stance adds to the challenge for companies trying to balance innovation with safety practices.
In Canada, the absence of comprehensive AI regulation raises concerns of its own: that the country could fall behind the U.S. in AI development, and that companies could relocate to less regulated environments. Since Canada's Artificial Intelligence and Data Act died in Parliament in 2025, neither the Canadian nor the American government has imposed broad AI regulations.
Despite the external pressure, Anthropic remains firm in its refusal to allow its technology to be used in autonomous weapons or mass surveillance systems. As the Pentagon's deadline approaches, the company says it stands by its principles and is prepared to explore alternatives if necessary.
