While we were enjoying the festive atmosphere of the New Year, OpenAI co-founder and CEO Sam Altman delivered a gift of his own: he publicly praised R1, the latest model released by China's large-model company DeepSeek, which is rare for him.
Altman stated that DeepSeek's R1 is an impressive model, especially given its cost-performance ratio. He added that OpenAI will obviously release better models, that having new competitors is genuinely invigorating, and that some new products are on the way.
At the time of writing, the tweet has over 7.6 million views and more than 5,000 comments. The name DeepSeek is echoing around the world as China takes the lead in open-source large models.
Since ChatGPT-like models began to be open-sourced, Altman has rarely commented on them. Even for Meta's open-source Llama series, Google's open-source Gemma series, Mistral AI's open-source Mistral series, and Alibaba's open-source Qwen series, he had little to say.
That changed earlier this month, when DeepSeek open-sourced R1, which performs comparably to OpenAI's o1. It ignited interest both in China and abroad, becoming a phenomenal, mold-breaking model.
In response, Altman announced that the free tier of ChatGPT will get the o3-mini model, a move most people credit to DeepSeek. Now that Altman has publicly praised DeepSeek, it is clear that China's large models have the strength to compete with world-class developers such as OpenAI.
After Altman's praise, some netizens panicked and urged him to ship the o-series models quickly: we cannot let China take the lead in the race for Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), so please accelerate.
Others appreciated Altman's magnanimity, saying that acknowledging DeepSeek while committing to launch more exciting products is exactly the right approach.
![](https://postimg.futunn.com/news-editor-imgs/20250129/public/17381143270344434318260-17381143270347239965599.png?imageMogr2/quality/minsize/1/ignore-error/1/format/webp)
You already possess this technology, but you were unwilling to release it so you could maximize profits from us. Now that DeepSeek has changed the rules of the game, you have to ship early. Competition is good, and no one should stop it.
![](https://postimg.futunn.com/news-editor-imgs/20250129/public/17381143270177071553128-17381143270167637435230.png?imageMogr2/quality/minsize/1/ignore-error/1/format/webp)
Without local, streamlined models, your products look irrelevant. OpenAI was supposed to be "open AI," but in reality it is its Chinese competitors who are doing open source.
Your attempt to lock this technology behind a paywall for massive profits has already failed. The future of AI lies in locally run, unrestricted reasoning models, so that access can't be cut off by so-called woke, corrupt companies.
![](https://postimg.futunn.com/news-editor-imgs/20250129/public/17381143270342756834507-17381143270335020475273.png?imageMogr2/quality/minsize/1/ignore-error/1/format/webp)
You don't get it. DeepSeek has beaten you in the areas where you excel. When was the last time OpenAI made a significant contribution to open source? You couldn't even release GPT-2 properly, and you don't even show the thinking tokens of the o1 model. They released state-of-the-art technology, while you have released nothing.
![](https://postimg.futunn.com/news-editor-imgs/20250129/public/17381143270923293572270-17381143270919999039321.png?imageMogr2/quality/minsize/1/ignore-error/1/format/webp)
DeepSeek is stronger than ChatGPT; there is no comparison at all. China is winning the AI race. I use it all day long, and it feels terrifying.
![](https://postimg.futunn.com/news-editor-imgs/20250129/public/17381143270186074551216-17381143270184811234787.png?imageMogr2/quality/minsize/1/ignore-error/1/format/webp)
Recently, DeepSeek open-sourced the R1 model, which surpassed OpenAI's o1 on the AIME 2024, MATH-500, and SWE-bench Verified benchmarks.
On the Codeforces coding benchmark, R1 scores only 0.3 points below o1; on MMLU it is 1 point lower, and on GPQA 4.2 points lower, making its overall performance comparable to o1's.
On price, however, R1 is far cheaper: input costs $0.14 per million tokens, more than 90% below o1's price, and output costs $2.19 per million tokens versus o1's $60, roughly 27 times cheaper.
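The price gap above is easy to sanity-check. The sketch below uses only the per-million-token figures quoted in this article (prices change over time, so treat them as illustrative); the `job_cost` helper is hypothetical, just to show how the per-token rates translate into the cost of a single API call.

```python
# Per-million-token prices as quoted in the article (USD); illustrative only.
R1_INPUT_PER_M = 0.14    # R1 input
R1_OUTPUT_PER_M = 2.19   # R1 output
O1_OUTPUT_PER_M = 60.00  # o1 output

def job_cost(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in USD of one job, given per-million-token prices."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Output-price ratio quoted as "about 27 times": 60 / 2.19 ≈ 27.4
ratio = O1_OUTPUT_PER_M / R1_OUTPUT_PER_M
print(f"o1 output costs ~{ratio:.1f}x R1 output")

# Example: a request with 100k input tokens and 10k output tokens on R1
print(f"R1 job cost: ${job_cost(100_000, 10_000, R1_INPUT_PER_M, R1_OUTPUT_PER_M):.4f}")
```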
Editor/danial