
Is the Rally in “the Most Important Stock on Earth” Far From Over? Nvidia Charges Toward a $10 Trillion Market Cap

Zhitong Finance ·  May 29 19:16

Source: Zhitong Finance

Some analysts expect Nvidia's market capitalization to reach $10 trillion by 2030, with its next-generation Blackwell GPUs expected to make a huge revenue contribution.

Beth Kindig, a technology industry analyst at the well-known investment firm I/O Fund, recently published a research note projecting that by 2030 Nvidia's (NVDA.US) stock price will have risen roughly 258% from current levels, putting its market capitalization at around $10 trillion. Its next-generation AI GPUs, built on the Blackwell architecture, are expected to make a huge revenue contribution.

Before Kindig's bullish report, Nvidia's stock had already risen more than 130% this year and as much as 670% since the start of last year, driven by several consecutive quarters of surging revenue on explosive demand for its H100 AI GPUs. With a current market capitalization of about $2.8 trillion, Nvidia is the world's third-largest listed company, behind only Microsoft and Apple.
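As a quick sanity check (my own arithmetic, not from the report), the roughly 258% projected upside and the roughly $2.8 trillion current market cap quoted above are mutually consistent with the $10 trillion target:

```python
# Sanity-check the figures quoted in the article (illustrative only).
current_market_cap_t = 2.8   # Nvidia market cap, trillions of USD
projected_gain = 2.58        # Kindig's projected ~258% stock-price rise by 2030

implied_cap = current_market_cap_t * (1 + projected_gain)
print(f"Implied 2030 market cap: ${implied_cap:.2f} trillion")
# A ~258% rise from ~$2.8T lands almost exactly on the ~$10T target.
```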

Last week, Nvidia, the AI chip leader that Goldman Sachs has called “the most important stock on the planet,” once again delivered results that stunned global investors, largely dispelling concerns about a slowdown in AI-related corporate spending. Total Q1 revenue rose 262% year over year to $26 billion, a record high and the third consecutive quarter of year-over-year growth above 200%. Supported by strong demand for H100 GPUs, Q1 data center revenue jumped 427% year over year to a record $22.6 billion.

Nvidia has once again reinforced the “AI faith” of technology stock investors, driving a continued rally in global chip stocks and kicking off a new round of frenzied gains in its own shares. Within days of the earnings release, Nvidia's stock broke through the $1,000 mark and then the $1,100 mark, closing at $1,139 on Tuesday.

Blackwell-architecture AI GPUs may be Nvidia's new revenue engine

According to Kindig, Nvidia's next generation of AI GPUs based on the Blackwell architecture will drive a new round of large-scale performance growth. She also emphasizes the importance of Nvidia's CUDA ecosystem to the entire AI field, and notes that Nvidia's automotive-grade chips will be another important catalyst.

Kindig predicts that by the end of Nvidia's fiscal 2026, revenue from Blackwell-architecture AI GPUs will far exceed that of their predecessor, the H100. By then, Blackwell is expected to drive Nvidia's data center revenue to as much as $200 billion.

“Blackwell will power and launch large language models with over a trillion parameters, which is what the big tech companies are working toward. Together, these hardware components amount to an extremely large data center system, with the CUDA ecosystem as strong support... The third is the automotive business, so Nvidia has many positive catalysts. Right now, Nvidia is still in its very early stages,” Kindig said.

Strong performance and prospects underline Nvidia's position as the biggest beneficiary of the global corporate AI craze; it could be called the “strongest seller” in the AI field. Facing surging demand for generative AI products such as ChatGPT and Google Gemini, as well as increasingly important enterprise AI software and tools, data center operators around the world are racing to stock up on the company's AI GPUs, processors that excel at the heavy workloads AI requires.

As for when the Blackwell-based AI GPUs unveiled in March will reach major data centers, Nvidia CEO Jensen Huang said the new Blackwell-architecture products will ship in the second quarter of this year, ramp production in the third quarter, and be deployed in data centers in the fourth quarter, adding that he expects “Blackwell architecture chip revenue will increase significantly” this year.

In Nvidia's March press release for the new Blackwell-architecture AI GPUs, Tesla CEO Elon Musk publicly declared that Nvidia's hardware is “the best AI hardware.” Musk has also compared the tech industry's AI arms race to a high-stakes “poker game” in which companies must invest billions of dollars in AI hardware every year to stay competitive.

According to reports, xAI, the AI startup founded by Musk, recently raised as much as $6 billion in Series B financing, valuing the roughly one-year-old company at $24 billion. xAI said the funds will be used to bring its first AI products to market and to build advanced AI infrastructure to accelerate research and development.

xAI's huge funding round is a positive sign for the industry, showing that global capital is still willing to invest in AI and support the continued development and iteration of AI technology. More importantly, unlike Amazon, Google parent Alphabet, Facebook parent Meta, and Microsoft, xAI has not talked about developing its own in-house AI chips, which means much of the newly raised capital will likely be spent on Nvidia (NVDA.US) AI GPU hardware, quite possibly concentrated on the newly launched Blackwell-architecture AI GPUs.

Cantor Fitzgerald analyst C.J. Muse said the xAI news further strengthens market confidence in continued surging AI chip spending. He added that Nvidia's strong earnings report completely dispelled concerns about a possible decline in AI chip demand in 2024. Cantor Fitzgerald recently raised its Nvidia price target to $1,400, the highest on Wall Street.

CUDA plus the most powerful AI GPUs form Nvidia's “moat,” and Wall Street's bullish research reports are getting more aggressive

I/O Fund technology industry analyst Kindig has long been bullish on Nvidia because she believes the company has an “indestructible moat” in the AI GPU business: massive scale in data center AI chips. The foundation of that moat is CUDA combined with the most powerful AI GPUs.

Kindig predicts in the report that the total addressable market for AI data centers will reach $400 billion by 2027 and $1 trillion by 2030, with the AI chip market expected to be dominated by Nvidia rather than its largest rivals, AMD and Intel.
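For context (my own arithmetic, not the report's), growing from $400 billion in 2027 to $1 trillion in 2030 implies a compound annual growth rate of roughly 36%:

```python
# Implied CAGR of the projected AI data center TAM between 2027 and 2030.
tam_2027_b = 400    # projected TAM in 2027, billions of USD
tam_2030_b = 1000   # projected TAM in 2030, billions of USD
years = 3           # 2027 -> 2030

cagr = (tam_2030_b / tam_2027_b) ** (1 / years) - 1
print(f"Implied CAGR 2027-2030: {cagr:.1%}")
# The two projections are consistent with ~36% annual growth.
```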

“Nvidia will take the lion's share,” Kindig said, “largely due to the CUDA ecosystem and the powerful performance of Nvidia's AI GPUs.”

“The CUDA software platform is a collaborative platform AI developers have to use. The real analogy is the iOS ecosystem barrier: people are locked into iPhones because developers build apps for iPhones, and the same thing is happening with Nvidia. In other words, CUDA is what AI engineers learn in order to program GPUs, which helps lock them in. Combined with high-performance GPUs, I'd now call it an insurmountable moat,” Kindig explained.

Kindig also said that custom AI chips developed in-house by big tech companies such as Amazon and Alphabet will never compete directly with Nvidia. “They mostly serve specific internal needs and won't be commercialized and sold like Nvidia's AI chips, so Nvidia has broad room to grow here,” she said.

Nvidia has spent many years in global high-performance computing, and its self-built CUDA platform in particular is used worldwide; it is the preferred hardware-software system for high-performance computing workloads such as AI training and inference. CUDA is a parallel computing acceleration platform and programming toolkit developed exclusively by Nvidia. It lets software developers and engineers use Nvidia GPUs to accelerate general-purpose parallel computing (it supports only Nvidia GPUs and is not compatible with GPUs from AMD, Intel, or other vendors).

By providing Nvidia-specific APIs, libraries, and compiler support, CUDA makes parallel computing on Nvidia GPUs highly efficient and greatly simplifies the development of AI software, above all the training of large AI models. CUDA is therefore a platform on which generative AI applications such as ChatGPT heavily depend; its importance is no less than that of the hardware itself, and it is essential to developing and deploying large AI models. With its technical maturity, performance optimization advantages, and broad ecosystem support, CUDA has become the most widely used collaborative platform for AI research and commercial deployment.

Nvidia's most popular AI chips, the H100 and H200 GPU accelerators, are based on its groundbreaking Hopper GPU architecture and deliver far more computing power than previous generations, especially in floating-point throughput, tensor-core performance, and AI-specific acceleration. Blackwell-architecture GPUs go further still: on a GPT-3-class LLM benchmark with 175 billion parameters, Nvidia says the Blackwell-based GB200 delivers 7x the inference performance and 4x the training speed of an H100 system.
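To give a sense of why a 175-billion-parameter model needs the “extremely large hardware data center system” Kindig describes, here is a rough back-of-the-envelope estimate (my own illustration, assuming 16-bit weights and the 80 GB of HBM on an H100 SXM, and ignoring activations, optimizer state, and KV cache):

```python
import math

# Rough memory footprint of a GPT-3-scale model's weights alone.
params = 175e9          # 175 billion parameters (GPT-3 scale)
bytes_per_param = 2     # FP16/BF16: 2 bytes per weight
h100_hbm_gb = 80        # HBM capacity of an H100 SXM GPU (assumed here)

weights_gb = params * bytes_per_param / 1e9
min_gpus = math.ceil(weights_gb / h100_hbm_gb)
print(f"Weights alone: {weights_gb:.0f} GB -> at least {min_gpus} H100s just to hold them")
# Serving or training such a model in practice takes far more GPUs than this floor.
```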

According to the latest Wall Street price targets compiled by TipRanks, investment firms are broadly optimistic about the future stock price of Nvidia, the AI chip hegemon, and targets have recently been raised sharply, mainly on expectations that Blackwell-architecture GPUs will bring a larger revenue scale. The consensus average target price stands at $1,185, the consensus rating is “strong buy,” and the highest target is as much as $1,400.

Editor: tolk

The translation is provided by third-party software.


The above content is for informational or educational purposes only and does not constitute any investment advice related to Futu. Although we strive to ensure the truthfulness, accuracy, and originality of all such content, we cannot guarantee it.