① U.S. government agencies have begun gradually phasing out Anthropic's Claude and switching to models from competitors such as OpenAI and Google; ② For Anthropic, this series of actions constitutes a significant blow. Public rejection by the Trump administration is very rare and may lead to Anthropic being viewed as an "outcast" within the U.S. government system.
Cailian Press, March 5 (edited by Xia Junxiong) Following the breakdown of cooperation between AI company Anthropic and the Trump administration, U.S. government agencies have started to gradually phase out Anthropic’s Claude and switch to models from competitors like OpenAI and Google.
Last Friday, the U.S. Department of Defense announced that, following a dispute over the terms of use for Anthropic’s AI model, the Pentagon had decided to classify the company as a supply chain risk.
Anthropic responded the same day, stating that the Pentagon’s decision “lacks legal basis,” and announced its intention to file a lawsuit.
This week, under instructions from President Trump, leadership at the U.S. Department of State, the Treasury Department, and the Department of Health and Human Services (HHS) directed employees to stop using Anthropic’s large language model, Claude.
According to media reports, the U.S. Department of Health and Human Services has asked employees to switch to other AI platforms, such as OpenAI’s ChatGPT and Google’s Gemini.
The U.S. Department of State also said it would replace the Anthropic model powering its internal chatbot, StateChat, with one from OpenAI.
According to a disclosed internal memo: “Currently, StateChat will use OpenAI’s GPT-4.1 model.” The memo added that more information would follow.
For Anthropic, this series of actions constitutes a significant blow. Public rejection by the Trump administration is very rare and may lead to Anthropic being viewed as an “outcast” within the U.S. government system, a treatment the U.S. has typically reserved for suppliers from adversarial nations.
According to informed sources, the dispute between the Trump administration and Anthropic centers on security restrictions: specifically, Anthropic’s limits on the U.S. military and intelligence agencies using its AI technology for autonomous weapons targeting and for conducting surveillance activities domestically.
As Anthropic faces resistance from the U.S. government, its competitors are seizing the opportunity to advance.
Late last Friday, OpenAI announced an agreement with the U.S. government to deploy its technology within the Department of Defense’s classified networks.
However, OpenAI’s decision has sparked significant controversy. Following the announcement, the uninstallation rate of the ChatGPT mobile application surged in the United States. In contrast, Anthropic gained broader public support, with Claude’s downloads skyrocketing to the top spot on the U.S. App Store.
After observing the backlash, OpenAI CEO Sam Altman posted on X on Monday, stating that the company would “revise” its agreement with the Department of Defense to clarify that its AI systems would not be “intentionally used for domestic surveillance of U.S. citizens and nationals.”