Nvidia reported revenue of $35.1 billion for the third quarter, up 17% sequentially and 94% year over year. On the conference call, the company said it is going full throttle on Blackwell production to meet strong demand. Executives noted that during Blackwell's initial ramp, gross margin will dip to a low point in the low 70s, then rebound to the mid-70s by the second half of next year.
On November 21, Cailian Press (editor: Qin Jiahe) reported that on November 20, Nvidia held its fiscal 2025 third-quarter earnings conference call, covering the company's third-quarter financials, business performance, and fourth-quarter guidance. CEO Jensen Huang and CFO Colette M. Kress then provided further detail on the company and took questions from analysts.
On the financial results, Kress said the third quarter was a record one for Nvidia: revenue of $35.1 billion, up 17% sequentially and 94% year over year, well above the prior outlook of $32.5 billion. Gross margin was 74.6% on a GAAP basis and 75% on a non-GAAP basis.
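As a quick back-of-the-envelope check (not from the call itself), the reported growth rates imply the prior-period revenue figures; the small rounding in the percentages means the results are approximate:

```python
# Back out implied prior-period revenues from the reported growth rates.
# Figures are in billions of USD; growth rates are as reported (rounded).
q3_revenue = 35.1
qoq_growth = 0.17   # +17% quarter over quarter
yoy_growth = 0.94   # +94% year over year

implied_prior_quarter = q3_revenue / (1 + qoq_growth)  # implied Q2 revenue
implied_year_ago = q3_revenue / (1 + yoy_growth)       # implied year-ago Q3

print(round(implied_prior_quarter, 1))  # ~30.0 (billion USD)
print(round(implied_year_ago, 1))       # ~18.1 (billion USD)
```

These implied figures (~$30.0B and ~$18.1B) are consistent with Nvidia's previously reported quarters, which supports the growth rates as stated.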
The company expects gross margin to dip into the low 70s during Blackwell's initial ramp, for example 71%-72.5%, and projects it will recover to the mid-70s by the second half of next year.
By segment: data center revenue reached $30.8 billion, up 112% year over year; gaming revenue was $3.3 billion, up 15% year over year; and automotive revenue hit a record $449 million, up 72% year over year.
On the business side, the next-generation Blackwell GPU is in full production to meet strong demand, which currently exceeds supply; the company is working to expand capacity. Nvidia is executing its roadmap as planned to lower training and inference costs and drive broader adoption of AI.
That does not mean demand for the current-generation Hopper platform will fall off immediately, however: the company said Hopper demand may continue to grow and persist through the first few quarters of next year.
For fourth-quarter guidance, Nvidia expects revenue of $37.5 billion, plus or minus 2%. Gaming revenue is expected to decline sequentially in the fourth quarter due to supply constraints.
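For reference (my own arithmetic, not a figure from the call), the "plus or minus 2%" band around the $37.5 billion guidance works out as follows:

```python
# Translate the Q4 guidance band into an explicit revenue range.
# Figures in billions of USD.
guidance = 37.5
band = 0.02  # plus or minus 2%

low = guidance * (1 - band)
high = guidance * (1 + band)

print(round(low, 2), round(high, 2))  # 36.75 38.25
```

So the guidance implies a fourth-quarter revenue range of roughly $36.75 billion to $38.25 billion.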
The following is the detailed Q&A session from the earnings call.
Q: Has the scaling of large language models (LLMs) stalled? What is driving demand for Blackwell?
A: Pre-training scaling of foundation models continues, and two additional scaling approaches have emerged. One is post-training scaling: initially reinforcement learning from human feedback (RLHF), now joined by reinforcement learning from AI feedback and scaling with synthetic data. The other is inference-time scaling, or test-time scaling, as in OpenAI's o1 model: the longer the model reasons at inference time, the better the quality of its answers.
Together, these three scaling approaches will significantly increase demand for infrastructure. The previous generation of models was trained on roughly 100,000 Hopper chips, while the next generation starts at 100,000 Blackwell chips.
In addition, enterprise adoption of agentic AI has become the latest trend, further driving demand.
Q: Can the company execute the roadmap presented at GTC this year and deliver products on schedule? What is behind the current Blackwell supply constraints?
A: Blackwell production is proceeding at full speed; the company is delivering more units this quarter than previously estimated and will continue to ramp production and supply next year.
The current supply shortfall was expected. The company is at the beginning of a new generation of foundation models, and the revolutionary development of generative AI and physical AI is driving strong demand for Blackwell.
In academic terms, physical AI refers to physical systems that can perform tasks typically associated with intelligent organisms, achieving the co-evolution of body, control, morphology, action execution, and perception. Jensen Huang has said that the next wave of the AI era is physical AI, which can also be understood as the era of robotics.
On the supply chain, Blackwell comes in different configurations such as air-cooled/liquid-cooled, NVLink 36, NVLink 72, and x86/Grace, and has support from global manufacturers including TSMC, Amphenol, SK Hynix, Foxconn, Micron, Dell, HP, Supermicro, Lenovo, and others.
On the roadmap, everything is proceeding as planned. The company will continue executing it, improving performance and reducing training and inference costs to make AI more accessible, while also improving energy efficiency per unit under limited power availability and generating higher returns for customers.
Q: Does the company expect Blackwell to surpass Hopper in the fourth quarter? Does the Blackwell ramp mean gross margin pressure will peak in the fourth quarter, dipping to just over 70%?
A: During the initial Blackwell ramp, gross margin is expected to be in the low 70s; in subsequent quarters, the company aims to bring it back quickly to the mid-70s.
Hopper demand will continue into the first few quarters of next year. Blackwell shipments next quarter will exceed this quarter's and will continue to increase in the quarter after that.
The market is undergoing two major shifts in computing: from CPU to GPU, and from data centers to AI factories. Both trends have only just begun, and this modernization and the creation of new industries are expected to continue for several years.
Q: Can gross margin reach the mid-70s in the second half of next year? When does the company expect the market to enter a hardware digestion phase? How many quarters of shipments are needed to meet the first wave of demand?
A: A gross margin in the mid-70s is a reasonable target for the second half of 2025, though it depends on circumstances. A digestion phase will not arrive quickly.
Currently, most data centers still run on CPUs, which is an outdated approach. Over the next few years, data centers worldwide will be upgraded toward architectures that support machine learning. That transformation is one part of the demand.
At the same time, generative AI is growing rapidly as a new submarket, with companies such as Runway, OpenAI, and Harvey. These AI-native companies have emerged because new opportunities have appeared.
Q: What does the company mean by a gross margin slightly over 70%? Will Hopper revenue decline sequentially in the next quarter?
A: A gross margin slightly over 70% means below the mid-70s, possibly 71%, 72%, or 72.5%. It could also come in higher, depending on actual conditions. The company targets a gross margin of around 75%.
Hopper will continue to sell in the fourth quarter, and growth is possible, but that remains to be seen.
Q: What is the current state of the inference market? Will Hopper chips be used for inference? Will inference demand surpass training in the next 12 months?
A: The company's hope is inference at massive scale: every company running inference around the clock, with users constantly generating tokens in every aspect of using a computer.
The rise of physical AI, which can understand the structure and meaning of the physical world and predict the future, is extremely valuable for industrial AI and robotics, and is also why the company is building Omniverse.
The company's focus is inference. The difficulty of inference lies in balancing high accuracy, high throughput, and low latency. Here NVIDIA has a strong architecture and ecosystem that lets developers innovate quickly on top of it; anything can be run on the NVIDIA computing platform.
Q: What is the status of the networking business?
A: The networking business has grown significantly year over year. Although it declined slightly this quarter, it should soon return to growth; the company is currently preparing networking for Blackwell and more systems. Going forward, demand for networking technology remains strong for serving large-scale systems.
Q: What is the progress on sovereign AI, and what is the supply situation in the constrained gaming business?
A: Sovereign AI is an important growth area following the rise of generative AI. Countries are working to build foundation models aligned with their own languages and cultures. The sovereign AI opportunity and its future pipeline remain fully intact.
On gaming supply, Nvidia is working hard to expand the supply chain across products. The current challenge is getting products to market quickly. Supply is expected to return to normal in the new year, though it may be tight this quarter.
Q: Will sequential growth accelerate as supply increases?
A: The company provides guidance one quarter at a time; once the next quarter begins, it will provide further updates.
Q: Any comment on the incoming US administration and the company's business in China?
A: Whatever the new administration decides, Nvidia will fully support the government's choices and, while complying with regulations, do its utmost to support customers and compete in the market.
Q: How is computational workload distributed across pre-training, post-training, and inference in the AI ecosystem?
A: Currently, the majority of resources go to pre-training foundation models, because the new post-training techniques have only just been introduced. Whatever is done in pre-training or post-training aims to minimize per-user inference cost as much as possible. The balance among the three is currently quite reasonable. Going forward, the company will continue to scale all three in order to significantly improve performance, reduce costs, and increase revenue.