
AMD Unveils Nvidia-Rival AI Chip, Expects Market to Reach $500 Billion Within Four Years

cls.cn ·  05:01

①The newly launched MI325X is, like the MI300X, based on the CDNA 3 architecture, with an upgraded HBM3e memory capacity; ②AMD announced that next year's MI355X will bring a significant architectural upgrade and a further increase in memory capacity; ③Lisa Su once again boldly predicted that the overall AI compute chip market will reach $500 billion by 2028.

Financial Associated Press, October 11 (Editor: Shi Zhengcheng) — In the field of AI computing, AMD, which has long lived in Nvidia's shadow, held an artificial-intelligence-themed launch event on Thursday, unveiling a number of new products including the MI325X compute chip. Market enthusiasm, however, was tepid, and AMD's stock price fell significantly.


As the product the market was most focused on, the MI325X is built on the same CDNA 3 architecture as the previously launched MI300X and shares much of its basic design. The MI325X is therefore better seen as a mid-cycle upgrade: it carries 256GB of HBM3e memory with a peak memory bandwidth of 6TB/s. The company expects the chip to enter production in the fourth quarter and to ship through partner server manufacturers in the first quarter of next year.


In AMD's positioning, its AI accelerators are more competitive in content-generation and inference use cases than in training models on vast amounts of data. Part of the reason is that AMD stacks more high-bandwidth memory on the chip, allowing it to beat some Nvidia chips on capacity. For comparison, Nvidia equips its latest B200 with 192GB of HBM3e, that is, each of its two compute dies connects to four 24GB memory stacks, although its memory bandwidth reaches 8TB/s.

AMD CEO Lisa Su emphasized at the event: "As you can see, when running Llama 3.1, the MI325 can deliver up to 40% more performance than Nvidia's H200."

According to official documents, the MI325, with its spec advantage, can deliver 1.3 times the peak theoretical FP16 (16-bit floating point) and FP8 compute performance of the H200.

Beyond the MI325X, AMD also dangled a bigger promise before the market: next year the company will launch the MI350 series GPUs on the CDNA 4 architecture. In addition to HBM3e capacity rising to 288GB and the process node moving to 3nm, the claimed performance gains are striking: FP16 and FP8 performance is said to be 80% higher than the just-released MI325, and the company even stated that MI350-series inference performance will be 35 times that of its CDNA 3 accelerators.


AMD expects platforms featuring the MI355X GPU to launch in the second half of next year, going up against Nvidia's Blackwell-architecture products.


Lisa Su also stated on Thursday that the market for data center artificial intelligence accelerators will grow to $500 billion by 2028, compared to $45 billion in 2023. In previous statements, she had expected this market to reach $400 billion by 2027.
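Taken at face value, growing from $45 billion in 2023 to $500 billion in 2028 implies a compound annual growth rate of roughly 62%. A quick sketch of the arithmetic (the function and variable names are illustrative, not from any cited source):

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Su's forecast: $45B in 2023 -> $500B in 2028, i.e. five years of growth.
rate = implied_cagr(45, 500, 2028 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # roughly 62% per year
```

For comparison, her earlier forecast of $400 billion by 2027 implied an even steeper annual growth rate over four years.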

It is worth noting that most industry analysts believe Nvidia holds more than 90% of the AI chip market, which is also why the chip leader can command a roughly 75% gross margin. The same considerations show up in the two companies' stock performance: after Thursday's event, AMD (red line) has seen its year-to-date gain narrow to under 20%, while Nvidia's (green line) gain is close to 180%.


(Year-to-date stock price performance of AMD and Nvidia, Source: TradingView)

Taking a swipe at Intel in passing

For AMD, the majority of its datacenter revenue still comes from CPU sales, and in practical deployments GPUs must be paired with CPUs in any case.

In its June-quarter financial report, AMD's datacenter sales doubled year-on-year to $2.8 billion, but AI chips accounted for only $1 billion of that. The company says it holds about 34% of the datacenter CPU market, trailing Intel's Xeon series.

As the challenger in datacenter CPUs, AMD also released its fifth-generation EPYC "Turin" series server CPUs on Thursday, ranging from the 8-core 9015 ($527) to the top-end 192-core 9965 ($14,831). AMD emphasized that the EPYC 9965 outperforms Intel's flagship server CPU, the Xeon 8592+, by several times in some workloads.


(Lisa Su showcases the "Turin" series server CPUs, source: AMD)

During the event, AMD invited Kevin Salvadore, Meta's Vice President of Infrastructure and Engineering, on stage, and he revealed that Meta has already deployed more than 1.5 million EPYC CPUs.


