
A “new narrative” under the Nvidia effect? After copper interconnects and glass substrates, the GB200 may ignite this link in the chain

cls.cn ·  May 25 08:26

① “We're ready for the next wave of growth.” Jensen Huang expects Blackwell-architecture chips to bring in substantial revenue this year. ② Demand for the H200 and Blackwell-architecture chips will far exceed supply, and Nvidia expects this to continue into next year. ③ Brokers estimate that the per-GPU HDI board value of the GB200 will be 244% to 476% higher than that of the DGX series.

“Science and Technology Innovation Board Daily”, May 25: Revenue up 262% year on year and net profit up 628%, Nvidia's latest quarterly report once again far exceeded analysts' expectations.

“We're ready for the next wave of growth.” Jensen Huang said this at the earnings call when discussing the Blackwell platform.

He said that since their launch in early March, Blackwell-architecture chips have been “in production for some time”: they will begin shipping in the second quarter of FY2025, production will ramp in the third quarter, and they are expected to be installed in customer data centers in the fourth quarter. Blackwell-architecture chips are expected to bring in substantial revenue this year.

Nvidia CFO Colette Kress pointed out that demand for H200 and Blackwell architecture chips will far exceed supply, and this situation is expected to continue until next year.

On the supply-chain side, Nvidia's partners have also reported a number of GB200 production developments recently:

Quanta Computer previously revealed that the Nvidia GB200 server is expected to be mass-produced in September.

On May 20, reports also said that Supermicro will ship more than 10,000 GB200-equipped AI servers next year, accounting for up to 25% of Nvidia's overall GB200 cabinet shipments. Supermicro has recently been notifying its supply chain to prepare stock, stepping up expediting efforts and even issuing explicit shipment-volume targets to bolster supply-chain confidence, in the hope that suppliers will stock up early so that GB200-equipped cabinets can be delivered to end customers on time next year.

▌Nvidia's next “money-making” weapon: what else is worth looking forward to?

Judging from Jensen Huang's remarks this time, he is very confident about demand for Blackwell and the GB200, and about their money-making potential.

KeyBanc analysts previously forecast strong demand for Nvidia's GB200 rack-scale computing system, which combines Nvidia's Grace CPU and Blackwell GPU, with an average price of roughly $1.5 million to $2 million per system; on that basis, they estimate the GB200 could generate $90 billion to $140 billion in annual revenue for Nvidia.
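For reference, a rough back-of-envelope calculation of what those figures imply for rack volumes (an illustrative sketch using only the prices and revenue range quoted above; the implied shipment numbers are not from the KeyBanc report):

    # Implied GB200 rack volumes from KeyBanc's quoted figures (illustrative only)
    price_low, price_high = 1.5e6, 2.0e6        # USD per rack-scale system
    revenue_low, revenue_high = 90e9, 140e9     # USD projected annual revenue

    racks_min = revenue_low / price_high        # most conservative pairing
    racks_max = revenue_high / price_low        # most aggressive pairing
    print(f"{racks_min:,.0f} to {racks_max:,.0f} racks per year")
    # -> roughly 45,000 to 93,333 racks per year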

Meanwhile, the secondary market has already seen several waves of GB200-related concept-stock frenzy.

When Nvidia unveiled the GB200 at its GTC conference, the copper-cable products it uses drew intense attention and “high-speed copper cable” concept stocks soared; last week, a rumored Morgan Stanley note shifted the market's focus to glass substrates, and related concept stocks climbed for several consecutive days; on the 21st, there were also reports that Nvidia plans to bring forward the introduction of fan-out panel-level packaging in the GB200 from 2026 to as early as 2025.

So beyond the hot spots above, which other segments of the chain are expected to hitch a ride on the GB200 express?

HDI may be one of them.

Changjiang Securities pointed out that the architectural change in the GB200 NVL72 does away with the traditional UBB used in past DGX-series servers. The UBB previously used a multi-layer PCB solution, while the new NVLink Switch Tray is expected to adopt an HDI solution.

Fangzheng Securities' May 22 report likewise suggests that the GB200 is expected to drive a significant increase in HDI usage.

The GB200 NVL72 is a full-rack solution. As its overall integration keeps rising, requirements across performance, high-frequency high-speed materials, bandwidth and transmission rates, and power consumption and cooling all increase sharply. Higher integration means higher PCB wiring density and stronger transmission and heat-dissipation capability, which are precisely the strengths of HDI boards. In particular, NVLink Switch PCBs resemble those in switch products and may adopt HDI solutions, which would further increase the amount of HDI used per server.

According to the analysts' estimates, the total PCB value of a GB200 NVL72 is roughly $24,900 to $33,945, and the corresponding HDI value per GPU is about $263 to $459, an increase of roughly 171.9% to 374.4% over the H100's $97. AI server PCBs are evolving wholesale toward HDI.

GF Securities' April 24 report is even more optimistic about the increase: in the DGX A100/H100/B100, only the OAM uses HDI, putting the per-GPU HDI board value at $67 to $80, whereas in the GB200 NVL72 the motherboard, network card, DPU, and NVLink Switch module boards are all HDI, lifting the per-GPU HDI board value to $275 to $386, an increase of 244% to 476% over the DGX series.
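As a quick arithmetic check of the two brokers' headline percentages (a minimal sketch using only the per-GPU dollar values quoted above; the pairing of low and high endpoints is my own assumption, and small differences from the quoted figures reflect rounding in the underlying estimates):

    # Per-GPU HDI value increases implied by the quoted dollar figures
    def pct_increase(new, old):
        return (new / old - 1) * 100

    # Fangzheng Securities: $263-$459 for GB200 vs $97 for the H100
    print(round(pct_increase(263, 97), 1), round(pct_increase(459, 97), 1))  # ~171.1, ~373.2

    # GF Securities: $275-$386 for GB200 vs $67-$80 for DGX-series OAM
    print(round(pct_increase(275, 80)), round(pct_increase(386, 67)))        # 244, 476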

In terms of market size, Prismark estimates the global HDI market at $10.54 billion in 2023 and expects it to reach $14.23 billion by 2028, a five-year CAGR of 6.2%.
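The quoted 6.2% is consistent with the standard CAGR formula, as a quick check against the 2023 and 2028 figures above:

    # CAGR check for Prismark's HDI market forecast (2023 -> 2028)
    start, end, years = 10.54, 14.23, 5   # market size, billions of USD
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")                  # -> 6.2%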

According to incomplete statistics from the “Science and Technology Innovation Board Daily”, A-share companies that have already laid out in HDI include:

It is worth mentioning that in Nvidia's AI narrative, if CUDA is the core of its moat, then its rapidly iterating product roadmap is probably another solid bastion behind it.

Previously, Nvidia launched a new architecture roughly every two years: Ampere in 2020, Hopper in 2022, and Blackwell this year. Now, that update interval is being cut in half to one year. At this earnings call, Jensen Huang said Nvidia will design a new chip every year: “After Blackwell, there is another chip. Our pace is one year.”

Jensen Huang did not announce the chip's specific name, but well-known analyst Ming-Chi Kuo said on May 8 this year that Nvidia's next-generation R-series/R100 AI chip will enter mass production in the fourth quarter of 2025, with the system/cabinet-level solution expected to enter mass production in the first half of 2026. The R100 is expected to use TSMC's N3 process and CoWoS-L packaging and to be equipped with eight HBM4 stacks.


