
AMD (AMD.US) Goes All In on AI Data Centers and AI PCs, Unveiling the Heavyweight MI325X and Ryzen AI 300 Series

Zhitong Finance ·  Jun 3 16:16

AMD (AMD.US) is accelerating the launch of a new server AI chip as well as an edge AI chip for AI PCs, attempting to weaken Nvidia's dominant position in the lucrative server AI chip market, which holds up to a 90% share. At the same time, AMD desires to gain a first-mover advantage over its chip peers in the edge AI market.

According to the Zhitong Finance App, AMD (AMD.US), the strongest competitor to AI chip leader Nvidia (NVDA.US) in PC discrete graphics cards and AI chips, is accelerating the launch of a new server AI chip and an edge AI chip for AI PCs, hoping to erode Nvidia's near-absolute dominance of the lucrative server AI chip market and to gain a first-mover advantage over its chip peers in edge AI.

The MI325X, an upgraded version of the MI300X AI chip used in AI data center servers, will go on sale in the fourth quarter. AMD CEO Lisa Su said in her opening keynote at the Computex conference in Taiwan that this successor to the MI300X will offer larger memory and faster data throughput.

In addition, AMD's more advanced MI350 series will launch in 2025, with the MI400 series following a year later. AMD's roughly annual release cadence mirrors the plan Nvidia CEO Jensen Huang announced in a Taipei speech the night before to release a new AI chip every year.

In terms of specifications, the new MI325X continues AMD's powerful CDNA 3 architecture and adopts HBM3E memory, the same memory type used in the Nvidia H200. Memory capacity rises significantly to 288GB and bandwidth to 6TB/s, further lifting overall performance. Its baseline specifications and compatibility are otherwise essentially the same as the MI300X's, making upgrades and transitions convenient for AMD customers. Lisa Su said the MI325X delivers the largest AI performance improvement in AMD's history, more than 1.3 times that of its competitor, the Nvidia H200.

Compared with Nvidia's H200, the in-demand upgrade of the H100, AMD lists the following advantages for the MI325X:

Twice the memory capacity of the H200

1.3 times the memory bandwidth of the H200

About 1.3 times the peak theoretical FP16 throughput of the H200

About 1.3 times the peak theoretical FP8 throughput of the H200

Support for models twice the size per server compared with the H200

Massive amounts of capital are pouring into new data center AI training and inference systems worldwide, and most of that spending currently goes to Nvidia's AI chips. Competitors such as AMD, Intel, and a number of AI chip startups are releasing new products in a bid to capture a share of the global wave of enterprise AI deployment. Lisa Su said demand for the existing MI300 line remains very strong, and she expects it to hold advantages over rival products in performance and power efficiency.

US cloud computing giant Microsoft already offers AMD's MI300X AI accelerators to customers on its Azure cloud platform. Although AMD is one of the world's leading GPU makers, its expansion into AI chips for data center servers has consistently trailed Nvidia.

However, as large cloud providers seek substitutes for Nvidia's expensive and scarce AI chips, and as AMD improves its software and hardware support, the MI300X has become popular foundational hardware in the AI field. Scott Guthrie, Microsoft's Executive Vice President of Cloud and AI, described the MI300X as the most cost-effective AI GPU in Azure OpenAI services.

Demand for AI chips from global enterprises is incredibly strong! AMD aims to carve out a share of the data center AI chip market, where Nvidia may hold up to 90%.

Among the many chip companies challenging Nvidia in the data center AI chip market, Santa Clara-headquartered AMD has made the most substantive progress, and Wall Street analysts expect AMD to win at least a 10% share over the next one to two years by chipping away at Nvidia's position.

AMD has raised this year's sales target for its AI accelerators to $4 billion. Although that is a significant increase from nearly zero a year earlier, it still trails Nvidia by a wide margin: according to Wall Street estimates, Nvidia's data center business unit (which houses AI chips such as the H100/H200) is expected to generate annual sales of over $100 billion, exceeding the combined annual revenue of AMD and Intel.

Given the enormous future market for AI chips, established chipmakers such as AMD and Intel as well as AI chip startups such as D-Matrix and Cerebras Systems are all competing for a slice of it. Among these challengers, AMD may be the one best placed to keep eroding Nvidia's market share, with Citigroup predicting that AMD will soon capture about 10% of the market.

"Nvidia wants 100% market share, but customers around the world, of course, do not want Nvidia to have 100% market share," said Sid Sheth, co-founder of competitor D-Matrix. "The opportunity is too great. It would be very unhealthy for any single company to monopolize it."

The shift from training AI models to so-called inference (deploying large AI models) means that compute requirements for data center AI chips are diversifying. That could give chip companies a meaningful opening to displace Nvidia's AI GPUs, especially where purchase and operating costs are lower. Nvidia's flagship chips are priced at around $30,000 or more, giving customers ample incentive to look for alternatives.

According to Gartner's latest forecast, the AI chip market is expected to grow 25.6% year over year to $67.1 billion in 2024. By 2027, the market is expected to more than double from its 2023 size, reaching $119.4 billion.

AMD itself is even more optimistic about the future AI chip market. At its 'Advancing AI' event in 2023, AMD abruptly raised its forecast for the global AI chip market in 2027 from the earlier $150 billion to $400 billion, against a 2023 market of only about $30 billion.

With the emergence of consumer-facing generative AI applications such as ChatGPT and Google's Bard, more and more global technology companies are piling into AI, which may drive a decade-long era of AI prosperity and development. According to the latest report from Bloomberg Intelligence analyst Mandeep Singh, total revenue from the generative AI market, including the AI chips needed to train AI systems as well as AI hardware and software applications, is expected to grow from last year's $40 billion to $1.3 trillion by 2032, a roughly 32-fold increase over ten years at a compound annual growth rate of up to 42%.
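The Bloomberg figures quoted above can be sanity-checked with simple compound-growth arithmetic (a back-of-the-envelope check, not part of the report itself):

```python
# Compound the quoted $40B starting size at the quoted 42% CAGR for
# 10 years and compare against the quoted $1.3 trillion endpoint.
start_billion = 40      # starting generative AI revenue, per the report
cagr = 0.42             # quoted compound annual growth rate
years = 10

end_billion = start_billion * (1 + cagr) ** years
multiple = end_billion / start_billion
print(f"${end_billion:,.0f}B, about {multiple:.0f}x the starting size")
```

Compounding at 42% yields roughly $1,330 billion and a ~33x multiple, consistent with the quoted $1.3 trillion and "32 times" figures.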

The year of the AI PC begins! AMD wants to dominate the edge AI chip field.

For consumers, AMD announced its third-generation Ryzen AI processors, codenamed 'Strix Point', expected to launch in July. Designed for laptops running large on-device AI models, they integrate a CPU, GPU, and NPU for accelerating AI tasks, pairing the Zen 5 CPU architecture with an RDNA 3.5 GPU and an XDNA 2 NPU.

At the Computex conference, Lisa Su invited a series of partners on stage, from HP CEO Enrique Lores to Asus Chairman Jonney Shih, to discuss the AI PCs they will soon launch using AMD's new Ryzen AI 300 series processors.

2024 is shaping up to be the 'year of the AI PC'. Well-known PC brands including HP, Dell, Acer, Asus, MSI, and Gigabyte will all launch their first batch of AI PCs based on Intel or AMD processors in 2024. Qunzhi Consulting predicts that in 2024, the first year of AI PC development, shipments of AI laptops will reach 13 million units, a 7% penetration rate in the laptop market. Penetration is expected to approach 30% by 2025, exceed 50% by 2026, and approach 80% by 2027, by which point the AI PC will have become a mainstream PC category.

According to a research report from Zacks Investment Research, 2023 was a crucial year for the AI industry, with Nvidia and AMD unveiling training-side data center GPU products alongside various investments and strategic acquisitions. Looking ahead to 2024, Zacks said that with chips in place as the foundational hardware, technology companies will keep updating large AI models and building AI applications, and further development of AI technology is expected to drive consumer hardware upgrades, such as the shift to AI PCs and AI smartphones, as well as new AI software services such as on-device AI models and embedded chatbots.

Running large on-device AI models and AI software efficiently rests on the core technology of AI inference, and on end devices the hardware foundation of AI inference centers on the CPU. The CPU's architecture suits it not only to general-purpose computing; its focus on control flow and its strength at complex sequential computation and logical decision-making let it shine in AI inference.

In AI inference scenarios, such as running large AI models and various AI software on consumer electronics like AI PCs, AI smartphones, and smartwatches, the CPU handles complex logical decisions while the NPU and GPU provide auxiliary compute, enabling streamlined large AI models and multiple AI applications to run efficiently on-device. The NPU (neural processing unit) is optimized to accelerate AI inference, delivering fast inference performance at lower power, especially for neural network workloads. The GPU excels at parallel computing and is suited to executing large numbers of matrix and vector operations, so it can shoulder data-intensive inference tasks such as image and video analysis.
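As a toy illustration of the division of labor described above (the workload categories and routing rules here are illustrative assumptions, not AMD's or Microsoft's actual scheduler):

```python
def route_workload(kind: str) -> str:
    """Pick an execution unit for a workload category on an AI PC.

    Hypothetical routing for illustration: neural-network inference
    to the NPU, data-parallel matrix work to the GPU, and
    control-flow-heavy logic to the CPU.
    """
    if kind == "neural_net_inference":
        return "NPU"   # low-power, optimized for neural networks
    if kind in {"matrix_math", "image_analysis", "video_analysis"}:
        return "GPU"   # parallel matrix/vector operations
    return "CPU"       # control flow, sequential logic, decisions

print(route_workload("neural_net_inference"))  # NPU
print(route_workload("image_analysis"))        # GPU
print(route_workload("app_logic"))             # CPU
```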

Research firm Counterpoint predicts that by 2027, AI PCs capable of smoothly running advanced generative AI applications will account for three-quarters of PC sales. Counterpoint said that although the compound annual growth rate (CAGR) of the overall laptop market from 2023 to 2027 is only 3%, the CAGR of laptops with embedded large AI models may reach 59%.

According to the latest forecast from research firm Canalys, global AI PC shipments will reach 51 million units in 2024, or 19% of total PC shipments. And that is just the beginning of the market's transformation: by 2028, AI PC shipments are expected to reach 208 million units, an expected 70% of total PC shipments, implying a striking compound annual growth rate (CAGR) of 42% from 2024 to 2028.
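The 42% figure can be recovered from the Canalys endpoints with the standard CAGR formula (again a back-of-the-envelope check):

```python
def cagr(start, end, years):
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

# Canalys endpoints: 51M AI PC units in 2024, 208M in 2028.
rate = cagr(51, 208, 2028 - 2024)
print(f"{rate:.1%}")  # prints "42.1%"
```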

On performance, a slide AMD displayed shows the latest Ryzen AI 300 series systems outperforming Qualcomm's Snapdragon X Elite, the core AI PC processor that also powers Microsoft's new Copilot+ PCs.

In NPU performance, the heart of an AI PC, the new Ryzen AI 300 series surpasses both Intel and Qualcomm to claim the strongest NPU today: the Qualcomm Snapdragon X Elite NPU delivers 45 TOPS, the NPU in Intel's upcoming Core Ultra 'Lunar Lake' also delivers 45 TOPS, while the Ryzen AI 300 series reaches 50 TOPS. Combined with AMD's strong CPU and GPU performance, the more powerful NPU allows edge AI models with larger parameter counts to be deployed across multiple application scenarios.

In addition to choosing Qualcomm chips as the core of its AI PCs, Microsoft has also given AMD chips key roles. Pavandeep Daulat, a Windows business executive at Microsoft, recently joined AMD on stage and said his team has worked with AMD from day one on the Copilot+ PC project.

Daulat said: "For us, running artificial intelligence on end devices means faster response times, better privacy, and lower cost. But it also means running AI models with billions of parameters on PC hardware. Compared with traditional PCs from a few years ago, we are talking about up to 20 times the performance and up to 100 times the efficiency on AI workloads."

AMD also demonstrated a new processor for traditional gaming laptops and desktops. "This is the fastest consumer CPU in the world," said Lisa Su, holding up AMD's Ryzen 9 9950X. The 16-core processor runs at up to 5.7GHz in boost mode.

AMD, fully committed to the AI data center and AI PC fields, is drawing strongly bullish views from Wall Street.

On the stock, AMD, up about 160% since the start of 2023 on the back of the AI boom, remains strongly favored by multiple Wall Street institutions. The logic behind these bullish research reports centers on the institutions' highly optimistic outlook for AMD in AI, particularly in the two core fields of AI data centers and AI PCs.

In AI data centers, AMD is expected to keep challenging market leader Nvidia on the strength of its continuously optimized CDNA architecture and high-performance AI chips built on 3D chiplet design. In AI PCs, AMD's years of cultivating the PC hardware market have earned it strong user loyalty, and with its command of core PC CPU and GPU technologies plus a solid software ecosystem, AMD is expected to enjoy a strong first-mover advantage.

UBS Group set a 12-month price target of $215 for AMD (against a latest close of $166.90) with a "buy" rating; Benchmark and UBS are both bullish to $200, also with "buy" ratings.

KeyBanc has the most optimistic target, bullish on AMD to $230 over the next 12 months. The firm also expects AMD's MI300X sales to reach $8 billion in 2024, well above AMD's own target of $4 billion, and said that as Microsoft begins pushing the MI300X to cloud customers, tech companies such as Oracle, Amazon, and Dell are expected to step up purchases of AMD hardware.


