
Who Can Stop Nvidia's Surge? Jensen Huang Wants to Build AI Factories

Source: Sina Technology · May 24 10:51
Author: Zheng Jun

Simply unstoppable. Even a broad decline in US stocks driven by inflation worries could not dent Nvidia's gains. After announcing an impressive first-quarter earnings report, the AI bellwether extended its strong momentum: it broke through the $1,000 mark in after-hours trading, kept setting new all-time highs on Thursday, and finally closed up 9.3% at $1,038.

The market capitalization is second only to Microsoft and Apple

Nothing, it seems, can stop Nvidia's rise. Riding the AI wave, the stock of the leading AI chip maker has taken off: up 523% over the past two years and 120% so far this year. The rocket-like ascent recalls Tesla a few years ago, except that Tesla's market value has since shrunk by half from its peak.

In just one year, Nvidia's market capitalization has soared from a few hundred billion dollars to $2.55 trillion, third among all listed companies worldwide, behind only Microsoft and Apple, and more than ten times that of fellow chip giants such as AMD, Qualcomm, and Intel. Nvidia's revenue only surpassed those companies' last year, but what the capital market prizes most is future growth, and Nvidia dominates the AI chip field.

As co-founder and CEO, Jensen Huang, famous for his leather jacket, holds nearly 4% of Nvidia's shares. As the stock keeps soaring, his personal fortune has crossed the $100 billion threshold, making him not only the world's richest ethnic Chinese person but also one of the top 15 wealthiest people on earth. Huang has reached the peak of his career: his influence, like his wealth, is at its zenith.

Huang, 61, is now a celebrity in the US business community, eclipsing even Jerry Yang, the Yahoo founder who led the internet boom of the late 1990s, and has been hailed by the US media as "the new Leonardo da Vinci, starting the AI industrial revolution." Huang is actually five years older than Yang, but reached his peak decades later in life; Yang was worth $10 billion before he turned 30, yet Yahoo has since become an internet relic.

When it comes to the phrase "starting a new industrial revolution," Huang did not demur. At the analyst call after the earnings release, he stressed that Nvidia is igniting a new industrial revolution: AI can not only significantly improve productivity in almost every industry, but also help companies cut costs and improve energy efficiency.

It is no exaggeration to say that Nvidia is currently the hottest star of the US stock market. Over the past week, Wall Street anxiously awaited its earnings report. Expectations were so high that many investors worried that if Nvidia failed to sustain its impressive performance, it could trigger a chain sell-off.

Moreover, Nvidia's results bear not only on its own fundamentals and stock price but serve as a barometer for the entire AI industry, directly affecting many AI stocks. Even the AI companies Nvidia invests in are investor favorites. According to Nvidia's earlier 13F filing with the SEC, the company has invested in a number of AI firms, among them SoundHound AI, a voice-interaction AI company whose stock soared 67% on the day Nvidia disclosed its holding.

In the end, Nvidia delivered a report that beat expectations, which not only reassured the market but also drew in more investors, pushing the stock through the $1,000 mark in one stroke. Investment bank Bernstein subsequently raised its price target for Nvidia from $1,000 to $1,300.

It was not only a brilliant earnings report that drove the stock higher. After the market closed, Nvidia announced a ten-for-one stock split, to take effect on June 7. A split does not change a company's fundamentals, but it lowers the per-share price and thereby attracts more retail investors; over the past few years, giants such as Apple and Tesla have used the same move to push their stocks further up. Nvidia also raised its dividend from 4 cents to 10 cents per share, expanding shareholder returns.
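The split mechanics above are simple arithmetic; a minimal sketch, using the $1,038 closing price from the article and a hypothetical one-share position:

```python
# Illustrative arithmetic for a ten-for-one forward stock split.
# The $1,038 price comes from the article; the one-share position is hypothetical.

def forward_split(shares: int, price: float, ratio: int = 10):
    """Return (shares, price_per_share) after a ratio-for-one forward split."""
    return shares * ratio, price / ratio

shares, price = forward_split(1, 1038.0)  # one pre-split share at $1,038
total_value = shares * price              # position value is unchanged by the split
```

The point the article makes is visible in the last line: the holder's total value does not move, only the per-share price drops into retail-friendly territory.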

Supply will remain tight next year

Just how brilliant was this report? Nvidia posted first-quarter revenue of $26 billion, up 262% year over year and above the market's $24.6 billion estimate, while net profit soared 644%, from $2 billion to $14.9 billion, beating the expected $12.9 billion.
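These growth figures can be sanity-checked with basic arithmetic; a quick sketch using the rounded numbers quoted above (exact reported figures differ slightly from the rounded ones):

```python
# Cross-checking the year-over-year figures quoted above (rounded, in $bn).

def yoy_growth_pct(current: float, prior: float) -> float:
    """Year-over-year growth expressed as a percentage."""
    return (current / prior - 1.0) * 100.0

profit_growth = yoy_growth_pct(14.9, 2.0)         # ~645%, close to the 644% quoted
implied_prior_revenue = 26.0 / (1.0 + 262 / 100)  # ~$7.2bn revenue a year earlier
```

Working backward from the 262% growth rate also shows how small the base was: year-ago quarterly revenue of roughly $7.2 billion versus $26 billion now.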

Notably, Nvidia's revenue has grown more than 200% for three consecutive quarters, and its gross margin has climbed more than 10 percentage points over that span, reaching as high as 78.4% in the first quarter. At this scale of revenue, such growth is almost unmatched among giant companies.

The fundamentals investors care about most are, of course, the pillar of Nvidia's takeoff: the AI-processor-driven data center business, whose quarterly revenue jumped 427% year over year to $22.56 billion. The gaming business, once the core, grew 18% year over year but reached only $2.6 billion. The automotive business, a new growth point Huang personally champions, grew 11% year over year to $329 million.

On the analyst call, Huang said deliveries of the latest AI processor, Blackwell, said to double the performance of the current Hopper processor, will begin next quarter. The new GPU, priced at more than $30,000, is already in full production; more than 100 OEMs will adopt it, more than double the number for Hopper.

According to Huang, Blackwell processors will be delivered in the second quarter of this year, ramp fully in the third quarter, and reach end customers on the new platform in the fourth quarter. Nvidia does not manufacture the processors itself; it relies on TSMC for fabrication and then delivers the chips to server vendors such as Dell and Supermicro.

For the past two years, Nvidia's AI processors have been in short supply, forcing the company to worry about allocating them fairly among customers, and this will continue with the new platform. Nvidia CFO Colette Kress said that demand for both the Hopper and Blackwell platforms far exceeds supply, and the same will be true next year.

From game chips to AI chips

For most of its 30-year history, Nvidia was not a household name like Apple, Microsoft, HP, or Intel. Only gamers knew this somewhat niche chip company, and gaming graphics cards were the main market for its GPUs.

Just two years ago, gaming was Nvidia's biggest source of revenue; now Nvidia has become thoroughly an AI chip company. Data center revenue has expanded dramatically to 87% of the total, nearly nine times the gaming business, up from six times just a quarter earlier, and the gap keeps widening.

After OpenAI's ChatGPT set off the generative AI revolution, the major giants escalated their race for computing power, driving a surge in data center demand and pushing Nvidia, the main chip supplier in this field, to the industry's summit. Nvidia's data center business began taking off in 2023.

According to data from industry analysis firm Omdia, Nvidia currently holds more than 80% of the AI processor market. Whether it is OpenAI and its ally Microsoft, or Google, Amazon, and Meta, their AI arms race is inseparable from Nvidia's GPUs. Nvidia's production capacity can no longer satisfy the market's strong demand, and the shortage has only prompted companies to order even more chips.

Who are the major customers of the data center business? CFO Kress explained on the analyst call that large cloud computing companies such as Google, Microsoft, Amazon, and Meta contributed more than $10 billion in revenue, nearly 45% of the data center total.

New growth comes from all walks of life

Recently, the major giants have been showing off their latest AI products. Whether it is OpenAI's GPT-4o, Meta's Llama 3, Google's Gemini, or Microsoft's Copilot, the data centers behind them are all built on Nvidia GPUs. Whoever fails to invest more risks falling behind in the competition.

The tech giants' AI arms race shows no sign of slowing, and they are sparing no expense to step up investment. In just the past few weeks, the four giants Amazon, Google, Meta, and Microsoft announced new AI deployment plans, committing a combined nearly $200 billion this year to build out data centers and processors, an investment 46% larger than last year's.

For cloud service companies, buying GPUs to build data centers is itself a profitable investment, because they can sell AI-based cloud services to many enterprises. Kress said that for every $1 invested in Nvidia chips, cloud computing companies can earn $5 in cloud service revenue over the next four years.
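Taken at face value, Kress's claim is a simple multiple; a sketch under that assumption (the $1 billion purchase size is hypothetical, not from the article):

```python
# The CFO's rule of thumb above: $1 of Nvidia GPU spend yields $5 of
# cloud service revenue over four years. The purchase size is hypothetical.

def cloud_revenue(gpu_spend: float, multiple: float = 5.0) -> float:
    """Projected cloud revenue over four years under the quoted multiple."""
    return gpu_spend * multiple

four_year_revenue = cloud_revenue(1e9)  # a hypothetical $1bn GPU purchase -> $5bn
annualized = four_year_revenue / 4.0    # ~$1.25bn/yr, assuming an even spread
```

Note this is gross revenue, not profit; the multiple says nothing about the cloud providers' operating costs.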

Huang stressed that it is not only the cloud giants competing in AI: there are also startups and consumer internet companies, as well as traditional industries such as automobiles and pharmaceuticals. Emphasizing the strong demand from AI startups, he said that 15,000 to 20,000 of them across various fields are currently waiting for Nvidia's chips, eager to train their AI models on Nvidia hardware.

To seize opportunities in AI, startups are indeed inseparable from Nvidia, the "super arms dealer." Sequoia Capital estimated in March that AI startups have spent $50 billion on Nvidia processors to train their own large models, while their combined revenue is only $3 billion.

Huang also singled out Musk's Tesla and the automotive industry's AI needs. He believes all future cars will need autonomous driving capabilities, and Tesla is leading that transformation in the automotive industry. "Automotive will become the largest vertical within the data center business this year."

During his recent visit to China, Musk announced that Tesla would invest $10 billion this year in AI research and development for driverless cars and robots; behind that figure, too, stand data centers and Nvidia processors. Tesla has purchased 35,000 H100 GPUs, contributing more than $1 billion to Nvidia's revenue.

Build an AI factory to meet market competition

Naturally, such a lucrative industry attracts competition, including from the tech giants themselves. The giants are unwilling to tie the AI strategies on which their futures depend entirely to Nvidia's platform, not least because Nvidia also supplies chips to startups that challenge Amazon, Google, and Microsoft, runs its own cloud service business, and is even involved in cloud gaming.

On the one hand, Google, Microsoft, Meta, and Amazon keep buying Nvidia GPUs to maintain their strong lead in AI training; on the other, they are racing to develop and deploy their own AI chips to reduce dependence on Nvidia's products.

Google's latest Gemini AI already runs on its own AI chips rather than Nvidia's. The AI giant spent $2 billion to $3 billion with Broadcom to build more than a million AI chips. Meta's second-generation chip, Artemis, will go into production this year, and Microsoft and Amazon both announced their own AI chips last November.

Cloud computing giants such as Google, Microsoft, and Amazon enjoy a unique advantage: as the biggest users of AI chips themselves, they can readily achieve economies of scale and thus lower costs. They can also design custom chips tailored precisely to their own needs, and supply those chips to the startups they invest in, expanding their AI ecosystems.

Although developing chips in-house requires huge investment, the giants need to buy chips in such huge volumes that in-house development yields huge savings. Take Google: Nvidia's AI chips sell for $15,000 to $30,000, while the amortized cost of Google's own chip is only $2,000 to $3,000. Industry analysis firm TechInsights estimates that Google in fact became the third-largest AI processor company last year.
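The cost gap described above is roughly an order of magnitude; a back-of-the-envelope sketch using midpoints of the quoted ranges (taking midpoints is an assumption, not from the article):

```python
# Midpoint comparison of the chip costs quoted above (midpoints are assumptions).
nvidia_price = (15_000 + 30_000) / 2     # midpoint of Nvidia's quoted price range
google_cost = (2_000 + 3_000) / 2        # midpoint of Google's quoted amortized cost
cost_ratio = nvidia_price / google_cost  # Nvidia chip costs ~9x Google's own chip
```

Even at the most favorable ends of both ranges ($15,000 vs $3,000), the ratio stays at 5x, which is why the giants can justify the up-front design investment.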

These giants can also use their own AI chips to lock startups into their ecosystems. Last September, Amazon announced an investment of up to $4 billion in San Francisco AI startup Anthropic, on the condition that Anthropic would use Amazon-designed AI chips going forward.

Beyond the giants, the two traditional chip powers, Intel and AMD, have also released their own data center AI processors and have begun shipping them and booking revenue. AMD CEO Lisa Su said last month that AMD expects at least $4 billion in AI processor revenue this year, while Intel expects $500 million in AI processor revenue in the second half.

How to face competition from giant customers and traditional chip rivals is Nvidia's biggest challenge. Huang, however, seems unworried. Such competition, he said, brings more opportunity, and Nvidia is not just a GPU vendor but an "AI factory."

He explained that generative AI has made inference extremely complex, involving not only predictive understanding of context but also rapid generation of large numbers of tokens. Nvidia's processors have clear advantages here: they are suited not only to training but to fast inference, and they offer excellent compatibility and support a variety of deployment methods.

Huang emphasized that Nvidia builds not just GPUs but AI factories, encompassing CPUs, GPUs, memory, NVLink, InfiniBand, and Ethernet switches. These AI factories are delivered as an overall architecture and platform, so that more partners and enterprise customers can deploy flexibly. That, he said, is Nvidia's unique strength.

Editor/Somer


