
Another milestone for the AI industry: OpenAI and Oracle to submit models to the US government for pre-release testing

cls.cn ·  Aug 30 11:45

OpenAI and Oracle, two leading AI companies, have announced that they will work with the U.S. AI Safety Institute on safety testing before releasing new models. The institute called the agreements a major milestone in managing the future of artificial intelligence, saying it will gain access to major new models for safety testing both before and after their public release.

Cailian Press, August 30 (Editor: Ma Lan) — The US artificial intelligence industry is gradually coming to accept government oversight. On Thursday, OpenAI and Oracle announced that they will sign cooperation agreements with the U.S. AI Safety Institute.

The institute operates under the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce. In a press release, it said it will gain access to the companies' models both before and after their major public releases.

OpenAI and Oracle have agreed to give the institute access to test new models; only after the models pass testing will they be released to the public.

OpenAI CEO Sam Altman said he was pleased to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of the company's models.

Jason Kwon, the company's Chief Strategy Officer, added that OpenAI believes the institute plays a crucial role in ensuring the responsible development of artificial intelligence in the United States, and that the company hopes their work together will provide a governance framework other countries around the world can draw on.

Regulatory Advancement

The U.S. AI Safety Institute was established in 2023, just days after President Biden issued the United States' first executive order on artificial intelligence. It is a key institution through which the White House assesses, guides, and researches artificial intelligence.

Elizabeth Kelly, director of the U.S. AI Safety Institute, said the agreements with OpenAI and Oracle are just the beginning, and mark an important milestone in the institute's work to help manage the future of artificial intelligence.

Meanwhile, on Wednesday the legislature of California, a leader in US technology, passed a controversial artificial intelligence safety bill. The bill requires companies to conduct safety testing on AI models that exceed certain cost or computing-power thresholds, and to put other safeguards in place.

The bill's passage may push many technology companies to accept further regulation in exchange for preserving as much freedom to innovate as possible.

OpenAI has also drawn criticism over safety issues. According to former employee Daniel Kokotajlo, the company has seen a slow but steady wave of departures in recent months, with roughly half of the OpenAI staff focused on the long-term risks of artificial intelligence having left.

This suggests widespread internal dissatisfaction with how the safety of new models has been treated, and it has certainly drawn the attention of US government agencies. Choosing to cooperate with the U.S. AI Safety Institute at this moment may be one of OpenAI's best options for dispelling outside doubts.


