
Revolutionary o1 Model Unveiled; OpenAI Warns That Biological Weapons Risk Has Also Risen

cls.cn ·  21:24

OpenAI's o1 model carries a "medium" risk on issues related to "chemical, biological, radiological, and nuclear" (CBRN) weapons, the highest risk rating OpenAI has ever given one of its models; experts say that AI software with more advanced capabilities, such as the ability to carry out step-by-step reasoning, is more likely to be abused by malicious actors.

OpenAI, the American artificial intelligence company, has acknowledged that its latest o1 model "significantly" increases the risk of the technology being abused to manufacture biological weapons.

On Thursday local time (September 12), OpenAI released its new o1 series of models, claiming they can reason through complex tasks and solve harder problems in science, coding, and mathematics than previous models. These advances are also seen as a key step toward artificial general intelligence (AGI).

According to tests, the o1 model solved 83% of the problems on a qualifying exam for the International Mathematical Olympiad, while GPT-4o solved only 13%. In the Codeforces programming competition, the o1 model reached the 89th percentile, compared with the 11th percentile for GPT-4o.

OpenAI says that, according to its tests, the next updated version of the model is expected to perform on challenging benchmark tasks in physics, chemistry, and biology at a level comparable to doctoral students. However, the more capable an AI model becomes, the greater its potential for harm.

At the same time, OpenAI published o1's system card, a document explaining how the model operates. The report outlines details including external red-team testing and the company's Preparedness Framework, and describes the measures taken for safety and risk assessment.

The report notes that o1 carries a "medium" risk on issues related to "chemical, biological, radiological, and nuclear" (CBRN) weapons, the highest risk rating OpenAI has ever given one of its models.

OpenAI says this means the technology has "significantly increased" the ability of experts to manufacture biological weapons. The rating scale has four levels: "low", "medium", "high", and "severe". Models rated "medium" or below can be deployed, and those rated "high" or below can be developed further.
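For illustration only, here is a minimal sketch, in Python, of how the four-tier gating rule described above could be expressed. This is not OpenAI's actual implementation; the class, the function names, and the numeric ordering of the tiers are hypothetical, reflecting only the rules as reported in this article.

```python
from enum import IntEnum

class CBRNRisk(IntEnum):
    """Hypothetical encoding of the four risk tiers reported in the article."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    SEVERE = 3

def can_deploy(risk: CBRNRisk) -> bool:
    # Per the article: models rated "medium" or below can be deployed.
    return risk <= CBRNRisk.MEDIUM

def can_develop_further(risk: CBRNRisk) -> bool:
    # Per the article: models rated "high" or below can be developed further.
    return risk <= CBRNRisk.HIGH

if __name__ == "__main__":
    o1_risk = CBRNRisk.MEDIUM  # the CBRN rating reported for o1
    print(can_deploy(o1_risk))           # True: medium is deployable
    print(can_develop_further(o1_risk))  # True: below the development cutoff
```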

Experts say that AI software with advanced capabilities, such as the ability to perform step-by-step reasoning, is more susceptible to abuse in the hands of malicious actors.

Yoshua Bengio, Turing Award winner and professor of computer science at the University of Montreal, said that if OpenAI's models now pose a "medium risk" for chemical and biological weapons, it only strengthens the need for, and urgency of, legislation.

Bengio pointed to California's pending bill SB 1047, formally the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". The bill would require developers of large frontier AI models to take precautionary measures, including reducing the risk that their models are used to develop biological weapons.

Bengio said that as advanced AI models move toward AGI, the risks will keep rising without proper safeguards, and the potential for AI reasoning capabilities to be used for deception is particularly dangerous.

Mira Murati, OpenAI's Chief Technology Officer, told the media that because of the o1 model's advanced capabilities, the company took a particularly "cautious" approach in releasing it to the public.

She added that the model passed red-team testing and performed far better on overall safety metrics than previous versions.

Editor/ping


