
The 'Ice and Fire' Duality in the AI Sector: Anthropic Just Got Banned, While OpenAI Secures a Deal with the Pentagon

cls.cn ·  Feb 28 16:03

①On Friday, OpenAI reached an agreement with the U.S. Department of Defense on the use of its AI models; ②the same day, U.S. President Trump placed OpenAI's competitor Anthropic on a "blacklist," ordering federal agencies to halt all business dealings with it; ③oddly, OpenAI and Anthropic hold identical "red line" rules on AI safety restrictions.

OpenAI CEO Sam Altman stated on Friday evening (February 27) that the company had reached an agreement with the U.S. Department of Defense regarding the use of its artificial intelligence models. Earlier that same day, U.S. President Trump had placed OpenAI's competitor Anthropic on a "blacklist" and ordered federal agencies and military contractors to cease all business activities with Anthropic.

On Friday, Altman posted on the social media platform X, stating, "Tonight, we reached an agreement with the Department of Defense to deploy our models within their classified networks. Throughout all our communications, the Department of Defense has demonstrated a strong emphasis on security and a keen desire to collaborate with us for optimal outcomes."

The artificial intelligence industry has just been through a dramatic week, centered on an intense standoff between Anthropic and the U.S. Department of Defense. Altman's post can be read as the conclusion to that drama:

The Department of Defense has chosen OpenAI.

Anthropic was the first company to deploy its models within the Department of Defense's classified networks. Prior to this, the company had been negotiating subsequent contract terms with the agency, but the talks ultimately collapsed.

Anthropic sought assurances from the Department of Defense that its models would not be used in fully autonomous weapons systems or for mass surveillance of the American public, while the Department of Defense wanted Anthropic to agree to allow the military to use these models for all legal purposes.

Earlier on Friday, after negotiations with the company broke down, U.S. Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security." Notably, this designation is typically reserved for foreign adversaries, and it will compel the Department of Defense's suppliers and contractors to stop using Anthropic's models.

President Trump also instructed all federal agencies within the United States to "immediately cease" using Anthropic's technology.

Oddly, however, in a memo dated Thursday (February 26), Altman told employees that OpenAI holds the same "red line" rules as Anthropic. In his Friday post, he also said the Department of Defense had agreed to comply with these restrictions.

Altman wrote, "Our two most important safety principles are the prohibition of large-scale domestic surveillance and the requirement that humans must be responsible for the use of force."

It is not yet clear why the U.S. Department of Defense chose to work with OpenAI rather than Anthropic, but for months beforehand, U.S. government officials had criticized Anthropic for what they saw as an excessive emphasis on artificial intelligence safety.

Altman also wrote that OpenAI would establish "technical safeguards to ensure its models function properly," and that the company would deploy personnel "to assist in the operation of our models and ensure their security."

Editor/Doris


