AMD CEO Lisa Su projected that the annual revenue from AI business in data centers would reach a scale of 'tens of billions of US dollars' by 2027.
In its latest earnings call, AMD outlined an ambitious roadmap for its AI business, projecting 'tens of billions of dollars in annual revenue,' but failed to effectively alleviate investor concerns about the pace of its short-term growth.
Lisa Su, CEO of AMD, for the first time clearly outlined the company's long-term AI revenue target at the conference. It is anticipated that by 2027, the annual revenue from AI business in data centers will reach a scale of 'tens of billions of US dollars.' She also disclosed that the first computing clusters based on the next-generation MI450 series accelerators in the landmark partnership with OpenAI are expected to go online in the second half of 2026.
However, these positive long-term projections, combined with better-than-expected fourth-quarter revenue guidance, were insufficient to prevent the company’s stock price from falling more than 3% in after-hours trading. According to Bloomberg, some investors had higher expectations, suggesting the market believes AMD may reap rewards from the AI boom at a slower pace than previously anticipated.
During the conference call, Lisa Su faced repeated questioning from analysts regarding when the growth of AI chips would 'truly explode.' A key piece of information revealed during the meeting was that the company’s traditional server business growth slightly outpaced its highly anticipated AI chip division in the previous quarter, contrasting sharply with the market’s heated expectations of AMD as the 'next NVIDIA' in the AI space.
Key Takeaways from AMD's Earnings Call:
AI Revenue Target: AMD CEO Lisa Su explicitly stated during the call that the company's AI business is entering a new growth phase, aiming to achieve 'tens of billions of dollars' in annual revenue by 2027.
OpenAI Partnership Details: The details of the collaboration with OpenAI were disclosed for the first time. The first batch of 1 gigawatt (GW) MI450 series accelerators is scheduled to come online in the second half of 2026. This partnership is seen as a strong endorsement of AMD's hardware, software, and full-stack solutions.
Next-Generation Product Roadmap: The company's next-generation AI chips, the MI400 series, along with the Helios rack-level solution, are scheduled for release in 2026. These products have already secured deep technical collaborations and deployment commitments from major customers, including OpenAI and Oracle.
ROCm Software Ecosystem Progress: The company released ROCm 7, which has made significant progress in performance and functionality. It emphasized that its open software strategy is gaining support and contributions from developers, including Hugging Face.
Concerns over growth momentum: Although AMD's management painted a picture of “tens of billions” of dollars in AI revenue, analysts repeatedly pressed for details on short-term growth catalysts during the earnings call. Management acknowledged that the year-over-year growth in the traditional server CPU business slightly outpaced the highly anticipated data center AI (GPU) segment in the third quarter, becoming a focal point of investor concern.
Guidance failed to impress the market: While the fourth-quarter revenue guidance (approximately USD 9.6 billion) exceeded the average analyst forecast, it fell short of some investors' lofty expectations for explosive AI-driven growth. Coupled with an anticipated decline in gaming revenue, this has prompted a cautious reassessment of the company’s growth trajectory.

AI Growth Rate Takes Center Stage as Server Business Unexpectedly Steals the Spotlight
For an AI challenger that markets have pinned high hopes on, the growth rate of its AI business serves as the ultimate yardstick. However, what surprised investors most during this earnings call was the unexpected outperformance of AMD's traditional server CPU business over its AI GPU segment.
During the Q&A session, when Stacy Rasgon, an analyst at Bernstein Research, asked directly whether server or GPU grew faster year-over-year within the data center business in the third quarter, management responded, "Directionally, they are similar, but the server business performed slightly better."
This statement confirmed some investors' concerns. The market had already driven AMD's stock price to elevated levels, with its valuation largely premised on the narrative of explosive growth in the AI business.
Additionally, the earnings report showed that the client business, which includes PC processors, surged 73% year-over-year, far outpacing the 22% growth of the data center business, further highlighting that AI business growth has yet to meet investor expectations.
While the "tens of billions" vision offers long-term promise, it does little to address near-term growth concerns.
In response to analysts' concerns about the growth rate, Lisa Su steered the discussion toward the longer term. She reiterated the company's optimistic outlook on the AI market, forecasting that annual AI-related revenue would reach the "tens of billions" of dollars by 2027 and stating that the overall AI processor market would far exceed the previously estimated $500 billion.
However, for investors, while long-term planning is important, the current growth trajectory remains equally critical. The earnings call revealed that the highly anticipated next-generation AI accelerator MI400 series and its Helios rack solution would not begin deployment for major clients like OpenAI until the second half of 2026. This immediately raised questions about growth momentum in the first half of 2026.
Analysts directly raised concerns about the product transition period at the conference: Will customers pause or slow down purchases of the current MI350 series before the more powerful MI400 series is released, potentially leading to a 'gap period' in growth? Lisa Su responded that the MI350 series is expected to continue gaining momentum through the first half of 2026. However, the market remains evidently cautious about the risk of a potential slowdown in growth.
Details of OpenAI Partnership Disclosed; Multiple "Whale" Clients Listed
One of the most closely watched highlights of this earnings call was the details of the partnership with OpenAI. Lisa Su confirmed that the two parties would deploy 6 gigawatts of Instinct GPUs, with the first batch of 1 gigawatt of MI450 series accelerators scheduled to go online in the second half of 2026. She emphasized that this was a "highly significant" multi-year, multi-generational deep collaboration, with AMD working closely with OpenAI and cloud service partners to ensure the availability of power and supply chains.
Lisa Su stated that OpenAI’s decision to use AMD's platform for developing its most complex AI workloads sends a clear signal: AMD’s Instinct GPUs and ROCm open software stack can meet the performance and total cost of ownership (TCO) requirements for the most demanding deployments. She anticipates this collaboration will 'significantly accelerate' the company’s data center AI business and has the potential to generate 'well over $100 billion' in revenue in the coming years.
In addition to OpenAI, AMD has also secured other heavyweight clients. Oracle has announced that it will be one of the first partners to deploy tens of thousands of MI450 GPUs starting in 2026. Furthermore, the U.S. Department of Energy has selected AMD's upcoming MI430X GPU and EPYC Venice CPU to build its next-generation flagship supercomputer, "Discovery." Responding to analyst questions, Lisa Su stated that the company aims to cultivate a broad customer base and is adjusting its supply chain to ensure sufficient capacity to serve multiple clients at a scale similar to OpenAI.
Full transcript of AMD’s Q3 2025 earnings call (translated by an AI tool):
Event Date: November 4, 2025
Company Name: AMD
Event Description: Q3 2025 Earnings Call
Source: AMD
Presentation Session
Operator:
Good morning, and welcome to AMD's Third Quarter 2025 Earnings Conference Call. All participants are currently in listen-only mode. Following the formal presentation, there will be a question-and-answer session. (Operator Instructions). As a reminder, this conference call is being recorded. I am now pleased to introduce Matt Ramsey, Vice President of Financial Strategy and Investor Relations. Thank you, Matt. Please go ahead.
Matt Ramsey, Vice President of Financial Strategy and Investor Relations:
Thank you, and welcome to the AMD Third Quarter 2025 Financial Results Conference Call. By now, you should have had the opportunity to review our earnings press release and accompanying slides. If you have not yet reviewed these materials, they can be found on the investor relations section of AMD’s official website. During today’s conference call, we will primarily refer to non-GAAP financial metrics. The complete reconciliation tables for non-GAAP to GAAP figures can be found in today’s press release and the slides published on our website.
Today’s participants on the call include Dr. Lisa Su, our Chairman and Chief Executive Officer; and Jean Hu, our Executive Vice President, Chief Financial Officer, and Treasurer. This is a live call and will be replayed via webcast on our website.
Before we begin, I would like to note that Dr. Lisa Su and other members of AMD's executive team will present our long-term financial strategy at the Financial Analyst Day in New York next Tuesday, November 11th.
Dr. Lisa Su will deliver a speech at the UBS Global Technology and Artificial Intelligence Conference on Wednesday, December 3rd.
Finally, Jean Hu will present at the 23rd Annual Barclays Global Technology Conference on Wednesday, December 10.
Today's discussion contains forward-looking statements based on our current beliefs, assumptions, and expectations. These represent views as of today only and, therefore, involve risks and uncertainties that may cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could lead to material differences in actual results.
With that, I will now turn the call over to Lisa.
Dr. Lisa Su, Chair and Chief Executive Officer:
Thank you, Matt. Good afternoon, and thank you to everyone joining us online today. We had an outstanding quarter with record revenue and profitability, reflecting broad-based demand across our data center AI, server, and PC businesses. Revenue grew by 36% year-over-year to $9.2 billion. Net income increased by 31%, and free cash flow more than tripled, driven primarily by record performance in sales of EPYC, Ryzen, and Instinct processors.
Our record third-quarter results mark a clear acceleration in our growth trajectory, as our expanding compute portfolio combines with rapidly growing data center AI to drive significant revenue and earnings growth.
Turning to our business segments, Data Center revenue grew 22% year-over-year to a record $4.3 billion, driven by the ramp of our Instinct MI350 series GPUs and continued share gains in servers. Our server CPU revenue reached a new high as adoption of our fifth-generation EPYC processors accelerated rapidly, accounting for nearly half of total EPYC revenue this quarter.
Sales of our previous-generation EPYC processors also remained strong this quarter, reflecting their compelling competitive advantages across a wide range of workloads. In cloud, we achieved record sales as hyperscale cloud providers expanded deployments of EPYC CPUs to support both their first-party services and public cloud offerings.
Hyperscalers launched over 160 new EPYC-based instances this quarter, including next-generation products from Google, Microsoft Azure, Alibaba, and others, delivering unmatched performance and price-performance across a broad set of workloads. Globally, there are now more than 1,350 public EPYC cloud instances available, up nearly 50% from a year ago.
Adoption of EPYC in the cloud by large enterprises more than tripled year-over-year, as our growing share in the on-premises market is driving enterprise customers to adopt AMD cloud instances to support hybrid computing. We expect cloud demand to remain very strong as hyperscalers significantly expand their general-purpose compute capacity while scaling their AI workloads. Many customers are now planning larger CPU deployments over the next several quarters to meet AI-driven demand growth, providing a powerful new catalyst for our server business.
Regarding enterprise adoption, EPYC server sell-through grew significantly both year-over-year and quarter-over-quarter, reflecting accelerating enterprise adoption. Over 170 fifth-generation EPYC platforms from companies such as HPE, Dell, Lenovo, and Supermicro have been launched, representing our most extensive product portfolio to date and offering solutions optimized for nearly all enterprise workloads.
This quarter, we secured large new orders from leading Fortune 500 technology, telecommunications, financial services, retail, streaming, social media, and automotive companies, expanding our footprint across key verticals. The performance and total cost of ownership advantages of our EPYC product portfolio, combined with increased go-to-market investments and a broader range of offerings from leading server and solution providers, position us well for continued enterprise market share gains.
Looking ahead, we remain on track to launch our next-generation 2-nanometer Venice processor in 2026. The Venice chips are already in the lab, showing exceptional results with significant improvements in performance, efficiency, and compute density. Customer demand and engagement for Venice are the strongest we have ever seen, reflecting our competitive advantage and the growing need for increased data center computing power.
Several cloud OEM partners have brought their first Venice platforms online, laying the groundwork for broad solution availability and cloud deployments at launch.
Turning to data center AI, our Instinct GPU business continues to accelerate. Revenue grew year-over-year, driven by the rapid ramp-up of MI350 series GPU sales and broader MI300 series deployments. Multiple MI350 series deployments are underway with major cloud and AI providers, with additional large-scale rollouts planned for the coming quarters.
Oracle became the first hyperscale cloud provider to publicly offer MI355X instances, delivering significantly higher performance for real-time inference and multimodal training workloads on OCI's massive superclusters. Neocloud providers such as Crusoe, DigitalOcean, TensorWave, and Vultr also began expanding the availability of their MI350 series public cloud offerings this quarter.
The deployment of MI300 series GPUs among AI developers also expanded this quarter. IBM and Zyphra will train multiple generations of future multimodal models on large MI300X clusters, while Cohere is now using MI300X on OCI to train its Command series models. On the inference side, new partners including Character AI and Luma AI are now running production workloads on the MI300 series, showcasing the performance and total cost of ownership advantages of our architecture in real-time AI applications.
We have also made significant progress on the software front. We released ROCm 7, our most advanced and feature-rich release to date, delivering up to 4.6x faster inference performance and up to 3x faster training performance compared to ROCm 6. ROCm 7 also introduces seamless distributed inference, enhanced code portability across hardware, and new enterprise tools that simplify the deployment and management of Instinct solutions.
Importantly, our open software strategy is resonating with developers. Companies such as Hugging Face, vLLM, and Lang have directly contributed to ROCm 7, and we are working to make ROCm an open platform for large-scale AI development.
Looking ahead, our data center AI business is entering its next phase of growth, with customer momentum rapidly building ahead of the 2026 launch of our next-generation MI400 series accelerators and Helios-scale solutions. The MI400 series combines new compute engines, industry-leading memory capacity, and advanced networking capabilities, delivering a major leap in performance for the most demanding AI training and inference workloads.
The MI400 series consolidates our expertise in silicon, software, and systems to power Helios, our hyperscale AI platform designed to redefine performance and efficiency at the data center level. Helios integrates our Instinct MI400 series GPUs, Venice EPYC CPUs, and Pensando networking into a double-wide rack solution optimized for the performance, power consumption, cooling, and maintainability required by next-generation AI infrastructure, while also supporting Meta's new Open Rack standard.
Development of our MI400 series GPUs and the Helios rack is progressing rapidly, supported by deep technical collaborations with an increasing number of hyperscale cloud providers, AI companies, and OEM/ODM partners, aiming for large-scale deployment by next year. The ZT Systems team, which we acquired last year, plays a critical role in this development, leveraging decades of experience building infrastructure for the world’s largest cloud providers to ensure customers can quickly deploy and scale Helios within their environments.
Additionally, last week, we completed the sale of ZT's manufacturing operations to Sanmina and established a strategic partnership, making it our primary manufacturing partner for Helios. This collaboration will accelerate large-scale customer deployments of our AI solutions.
On the customer front, we announced a comprehensive multi-year agreement with OpenAI to deploy 6 gigawatts of Instinct GPUs, with the first 1 gigawatt of MI450 series accelerators scheduled to come online in the second half of 2026. This partnership establishes AMD as OpenAI's core computing provider and highlights the strength of our hardware, software, and full-stack solutions strategy. Going forward, AMD and OpenAI will collaborate more closely on future hardware, software, networking, and system-level roadmaps and technologies.
OpenAI’s decision to run its most complex and sophisticated AI workloads on the AMD Instinct platform sends a clear signal that our Instinct GPUs and ROCm open software stack deliver the performance and total cost of ownership required for the most demanding deployments. We anticipate this partnership will significantly accelerate our data center AI business and has the potential to generate well over $100 billion in revenue in the coming years.
Oracle also announced it will be a key launch partner for the MI450 series, deploying tens of thousands of MI450 GPUs on Oracle Cloud Infrastructure starting in 2026 and continuing to scale through 2027 and beyond.
Our Instinct platform is also gaining traction in sovereign AI and national supercomputing projects. In the UAE, Cisco and G42 will deploy a large-scale AI cluster powered by Instinct MI350X GPUs to support the country’s most advanced AI workloads. In the United States, we are collaborating with the Department of Energy and Oak Ridge National Laboratory, along with our industry partners OCI and HPE, to build Lux AI, the first AI factory dedicated to scientific discovery. Powered by our Instinct MI350 series GPUs, EPYC CPUs, and Pensando networking, Lux AI will provide a secure, open platform for large-scale training and distributed inference when it comes online in early 2026.
The U.S. Department of Energy has also selected our upcoming MI430X GPU and EPYC Venice CPU to power “Discovery,” Oak Ridge’s next-generation flagship supercomputer, which aims to set the standard for AI-driven scientific computing and extend U.S. leadership in high-performance computing. Our MI430X GPU, designed to drive national-scale AI and supercomputing projects, extends our leadership in powering the world’s most powerful computers into the next generation of scientific breakthroughs.
In summary, our AI business is entering a new phase of growth, driven by our leading solutions, expanding customer adoption, and an increasing number of large-scale global deployments, placing us on a clear trajectory to achieve tens of billions of dollars in annual revenue by 2027. I look forward to providing more details about our data center AI growth plans at next week’s Financial Analyst Day.
In the client and gaming segments, division revenue grew 73% year-over-year to $4 billion. Our PC processor business performed exceptionally well, with quarterly sales reaching record levels, driven by strong demand across the breadth of our leading Ryzen product portfolio. Desktop CPU sales hit all-time highs, with record channel replenishment and sell-through, fueled by robust demand for our Ryzen 9000 processors, which deliver unparalleled performance in gaming, productivity, and content creation applications.
This quarter, OEM sell-through of Ryzen-based laptops also grew significantly, reflecting continued end-customer demand for high-end gaming and commercial AMD PCs. Momentum in the commercial market accelerated this quarter, with commercial PC sell-through growing over 30% year-over-year, driven by a significant increase in enterprise adoption on the strength of large orders won from Fortune 500 healthcare, financial services, manufacturing, automotive, and pharmaceutical companies.
Looking ahead, based on the strength of our product portfolio, broader platform coverage, and expanded go-to-market investments, we see significant opportunities to continue growing our client business at a faster rate than the overall PC market.
In gaming, revenue grew 181% year-over-year to reach $1.3 billion. Semi-custom revenue increased as Sony and Microsoft prepared for the upcoming holiday sales period. In gaming GPUs, both revenue and channel sales volumes grew significantly, driven by our leadership in performance-per-dollar with the Radeon 9000 series. FSR4, our machine learning super-resolution technology that boosts frame rates and creates more immersive visuals, saw rapid adoption this quarter, with the number of supported games doubling since launch to over 85 titles.
Turning to our embedded segment, revenue declined 8% year-over-year to $857 million. On a sequential basis, both revenue and sell-through improved as demand strengthened across multiple markets, primarily test and simulation, aerospace and defense, industrial vision, and healthcare.
We expanded our embedded portfolio with new solutions that reinforce our leadership in adaptive and x86 computing. We began shipping our industry-leading Versal Prime Series Gen 2 adaptive SoCs to key customers, delivered the first Versal RF development platforms to support several next-generation design wins, and launched the Ryzen Embedded 9000 Series, which delivers industry-leading performance-per-watt and low latency for robotics, edge computing, and smart factory applications.
Design momentum for our embedded portfolio remains very strong. We are on track to achieve a record second consecutive year of design wins, with year-to-date totals exceeding $14 billion, reflecting the increasing adoption of our leading products across a broad set of markets and expanding application areas.
In summary, our record third-quarter results and strong fourth-quarter outlook reflect the significant momentum building across all areas of our business, driven by sustained product leadership and disciplined execution. Our data center AI, server, and PC businesses have each entered robust growth phases, supported by an expanding TAM, accelerating adoption of our Instinct platform, and share gains in EPYC and Ryzen CPUs.
The demand for compute power has never been greater, as every major breakthrough in business, science, and society now relies on stronger, more efficient, and smarter computing capabilities. These trends present unprecedented growth opportunities for AMD. I look forward to sharing more details about our strategy, roadmap, and long-term financial goals at our Financial Analyst Day next week.
Now, I will hand over the call to Jean, who will provide more details on our third-quarter performance. Jean? Thank you.
Jean Hu, Executive Vice President, Chief Financial Officer, and Treasurer:
Thank you, Lisa. Good afternoon, everyone. I will begin by reviewing our financial results, followed by providing our outlook for the fourth quarter of fiscal year 2025.
We are pleased with our strong third-quarter financial performance, achieving record revenue of $9.2 billion, representing a 36% year-over-year increase and exceeding the high end of our guidance, reflecting robust momentum across our businesses. Our third-quarter results do not include any revenue from shipments of MI308 GPU products to China. Revenue grew 20% sequentially, driven by strong growth in the Data Center as well as the Client and Gaming segments, along with moderate growth in the Embedded segment.
Gross margin was 54%, up 40 basis points year over year, primarily driven by product mix. Operating expenses were approximately $2.8 billion, an increase of 42% year over year, as we continued to invest aggressively in R&D to capture significant AI opportunities and in go-to-market activities to drive revenue growth. Operating income was $2.2 billion, with an operating margin of 24%. Taxes, interest expenses, and other items totaled $273 million.
For the third quarter of 2025, diluted earnings per share were $1.20, compared to $0.92 in the same period last year, representing a 30% year-over-year increase.
Turning now to our reportable segments, starting with Data Center. The Data Center segment achieved record revenue of over $4.3 billion, growing 22% year over year, primarily driven by strong demand for fifth-generation EPYC processors and the Instinct MI350 series GPUs. Sequentially, Data Center revenue increased by 34%, mainly due to the strong ramp-up of our AMD Instinct MI350 series GPUs.
The Data Center segment's operating income was $1.1 billion, or 25% of revenue, compared to $1 billion, or 29% of revenue, in the same period last year, benefiting from higher revenue but partially offset by increased R&D investments aimed at capturing significant AI opportunities.
The Client and Gaming segment achieved record revenue of $4 billion, increasing 73% year over year and 12% sequentially, driven by strong demand for the latest generation of client and graphics processors as well as stronger sales of gaming console products. Within the Client business, revenue reached a record $2.8 billion, growing 46% year over year and 10% sequentially, supported by record sales of our Ryzen processors and a richer product mix. Gaming revenue rose to $1.3 billion, surging 181% year over year and 16% sequentially, reflecting higher semi-custom revenue and strong demand for our Radeon GPUs.
Operating income for the Client and Gaming segment was $867 million, or 21% of revenue, compared to $288 million, or 12% of revenue, in the same period last year, benefiting from higher revenue but partially offset by increased go-to-market investments to support our revenue growth.
Revenue for the Embedded segment was $857 million, down 8% year over year. The Embedded segment grew 4% sequentially as we observed strengthening demand in certain end markets. Operating income for the Embedded segment was $283 million, or 33% of revenue, compared to $372 million, or 40% of revenue, in the same period last year. The decline in operating income was primarily due to lower revenue and shifts in end-market mix.
Before I review the balance sheet and cash flow, a reminder that we completed the sale of our ZT Systems manufacturing operations to Sanmina last week. The third-quarter financial results of the ZT manufacturing business are reported separately as discontinued operations in our financial statements and have been excluded from our non-GAAP financial data.
Turning to the balance sheet and cash flow, we generated $1.8 billion in cash from operations during the quarter, with record free cash flow reaching $1.5 billion. We returned $89 million to shareholders through stock repurchases, bringing total buybacks for the first three quarters of 2025 to $1.3 billion. As of the end of the quarter, we had $9.4 billion remaining under our authorized share repurchase program.
At the end of the quarter, our cash, cash equivalents, and short-term investments totaled $7.2 billion. Our total debt stood at $3.2 billion.
Now turning to our outlook for the fourth quarter of 2025. Please note that our guidance excludes any revenue from shipments of AMD Instinct MI308 to China. For the fourth quarter of 2025, we expect revenue of approximately $9.6 billion, plus or minus $300 million. The midpoint of our guidance represents approximately 25% year-over-year growth, driven by strong double-digit growth in our data center as well as our client and gaming segments, and a return to growth in our embedded segment.
On a sequential basis, we expect revenue to grow by approximately 4%, driven by double-digit growth in the Data Center segment, reflecting strong server growth and the continued ramp of our MI350 series GPUs; a decline in the Client and Gaming segment, with client revenue growing while gaming revenue declines by a significant double-digit percentage; and double-digit growth in the Embedded segment.
Additionally, we expect non-GAAP gross margin of approximately 54.5% for the fourth quarter. We project non-GAAP operating expenses of around $2.8 billion. We anticipate net interest and other items to contribute income of approximately $37 million.
We forecast our non-GAAP effective tax rate to be 13%, with diluted shares expected to be approximately 1.65 billion.
Finally, we have executed exceptionally well, achieving record revenue in the first three quarters of this year. Our strategic investments position us well to capitalize on expanding AI opportunities across all end markets, driving sustainable long-term revenue growth and margin expansion, thereby creating significant value for shareholders.
With that, I’ll turn the call back over to Matt for the Q&A session.
Matt Ramsey, Vice President of Financial Strategy and Investor Relations:
Thank you very much, Jean. John, we can now open the floor for questions from the audience.
Jean Hu, Executive Vice President, Chief Financial Officer, and Treasurer:
Thank you.
Operator:
Thank you. We will now proceed to the Q&A session. (Operator Instructions).
Q&A Session
Operator:
Please limit your questions to one primary question and one follow-up. Thank you. Please bear with us for a moment as we are collecting the questions.
The first question comes from Vivek Arya of Bank of America Securities. Please proceed with your question.
Vivek Arya, Analyst:
Thank you for taking the questions. I have one near-term and one medium-term question. On the near-term, Lisa, could you give us a sense of the [likely CPU/GPU] mix in the third and fourth quarters? Additionally, from a tactical standpoint, how are you managing the transition from MI355 to MI400 in the second half of next year? Can we expect continued growth from these fourth-quarter levels in the first half of next year, or should we anticipate some sort of pause or digestion period before customers begin adopting the MI400 series?
Dr. Lisa Su, Chair and Chief Executive Officer:
Certainly, Vivek. Thank you for the question. A few points to note: our data center business was very strong in Q3, with robust performance in both our server and data center AI businesses. As a reminder, this was achieved without any MI308 sales. The MI355 ramp has proceeded smoothly, scaling rapidly in Q3, and progress is on track. As I mentioned, we are also seeing strength in server CPU sales, not just in the near term: customers have given us visibility into higher demand over the next few quarters, which is a positive sign.
Moving into Q4, data center performance remains strong with double-digit quarter-over-quarter growth, driven by growth in both server and data center AI businesses, again reflecting the strength of these segments. To address your question, while we have not yet provided guidance for 2026, based on what we see today, the demand environment for 2026 looks very favorable. Therefore, we expect MI355 to continue ramping through the first half of 2026. Then, as mentioned, the MI450 series will come online in the second half of 2026, and we anticipate that our data center AI business will accelerate further as we move into the latter part of 2026.
Vivek Arya, Analyst:
Understood. My follow-up question is that there has been some discussion within the industry about whether OpenAI has the capacity to collaborate with all three commercial GPU and ASIC suppliers simultaneously, given the constraints in power consumption, capital expenditure, and their existing CSP partners. What are your thoughts on this, and what is your visibility on the initial collaboration? More importantly, how does it scale toward 2027? Is there a way to model the allocation scenario, or how should we think about the visibility levels for this very important client? Thank you.
Unnamed Spokesperson:
Yes, certainly, Vivek. Look, we are obviously very excited about our relationship with OpenAI. It is a very important partnership. Think about it—this is a rather unique time for AI, where the demand for compute across all workloads is extremely high. In our collaboration with OpenAI, we are planning multiple quarters ahead to ensure power availability and supply chain readiness. The key point is that the first gigawatt-scale deployments will begin in the second half of 2026, and this work is already underway.
And given lead times and other factors, we continue to plan closely with OpenAI and our CSP partners to ensure we are all prepared for the Helios deployment, so that we can roll out the technology as planned. So overall, I think we are working together very closely. We have good visibility into the MI450 ramp, and things are progressing very well.
Host:
The next question comes from Thomas O'Malley at Barclays. Please go ahead with your question.
Thomas O'Malley, Analyst:
Good morning. Thank you for answering my question, and congratulations on the strong results. My first question relates to the announcement at OCP. Customer engagement is clearly growing. Could you provide your perspective on how you see standalone sales versus system sales evolving next year? When do you anticipate this crossover might occur? And what has been the initial feedback from customers after gaining deeper insights at the event?
Unnamed Spokesperson:
Yes, certainly. Tom, thank you for the question. There's a lot of excitement around MI450 and Helios. I think the reception at OCP was very strong. We had a number of customers actually bring their engineering teams to learn more about the systems and how to build them. There’s been some discussion about how complex these rack-scale systems are, and they are indeed, and we're very proud of the design of Helios.
I think it has all the features you would expect: reliability, performance, and power efficiency. Interest in MI450 and Helios has really broadened over the last few weeks, especially with some of the announcements we've made with OpenAI and OCI, and then the announcement at OCP with Meta. So overall, I would say things are going very well from our perspective, both in development and in customer engagement. As for the rack solution, we expect early adopters of MI450 to focus primarily on rack-scale solutions. We will offer other form factors for the MI450 series, but there is significant interest in the full rack-scale solution.
Thomas O'Malley, Analyst:
Very helpful. My follow-up is a broader question, similar to what Vivek asked. However, if you look at some of the early power requirement announcements for next year, they are quite substantial. Additionally, there are component-related challenges in areas like interconnects and memory. From your perspective as an industry leader, what do you think will be the limiting factor? Will it be the unavailability of components first? Or do you believe that data center infrastructure and/or power will become the constraining factors for these large-scale deployments next year? Thank you.
Unnamed Spokesperson:
Yes, certainly, Tom. I think the issue you're highlighting is something that, as an industry, we need to collectively address. The entire ecosystem must plan together, and that's exactly what we're doing. So, we're working with customers to map out their power plans for the next two years—literally the next two years. We are also collaborating with our supply chain partners from the standpoint of silicon, memory, packaging, and the component supply chain to ensure all that capacity is available.
I can tell you, based on our visibility, we feel very good about where we are. We have a strong supply chain ready to deliver what are very meaningful growth rates and massive amounts of compute demand. I think all of this will be tight. You can see some of the capex going in, there’s a desire to increase compute capacity, and we’re working very closely. I will say when these kinds of tensions occur, the ecosystem works very hard. So we’re also working hard to secure more power, more supply, and all of those things are improving. So the end result is, I think we’re well-positioned as we transition into MI450 and Helios in the second half of 2026 into 2027 to see meaningful growth.
Host:
The next question comes from Joshua Buchalter of TD Cowen. Please go ahead with your question.
Joshua Buchalter, Analyst:
Hey, everyone, thank you for taking the questions. I actually wanted to start on the CPU side. Both you and your largest competitor in that space have talked about strong near-term demand for general-purpose servers supporting AI workloads. Maybe you could talk about the sustainability of these trends. They mentioned supply constraints. Are you seeing anything like that in your supply chain? Are we in a period where we should think of the CPU business on the data center side as non-seasonal, or should we expect normal seasonality in the first half of next year? Thank you.
Unnamed Spokesperson:
Certainly, Josh. A couple of comments on the server CPU side. We have been observing this trend over the past several quarters, and we actually began seeing positive signs in CPU demand a few quarters ago. As we've moved through 2025, the breadth of CPU demand has expanded, and some of our large hyperscale customers are now forecasting significant CPU deployments in 2026. From that perspective, it is a positive demand environment, driven by AI requiring substantial amounts of general-purpose computing. This aligns well with our 'Turin' product cycle: the 'Turin' ramp has been very rapid, and we're seeing strong demand for that product as well as continued robust demand for our 'Genoa' product line.
Back to seasonality: as we look into 2026, we expect the CPU demand environment to be positive. We'll provide more guidance as we get closer to the end of the year, but I do think this demand is sustainable. This is not a short-term phenomenon; it's a multi-quarter phenomenon, as demand really picks up with these AI workloads going into real production work. And Josh, from a supply standpoint, we believe we have the supply to support our growth, particularly as we look to 2026 and prepare for scaling.
Joshua Buchalter, Analyst:
Understood. Thank you both. My follow-up question, Lisa: in your prepared remarks, you highlighted progress on ROCm 7. I know this has been an area of focus. Could you spend a minute or two talking about how you view ROCm's competitive positioning at this point, the breadth of support you're able to offer the developer community, and the areas where you still need to work to close any potential competitive gaps? Thank you.
Unnamed Spokesperson:
Yes. Josh, thank you for the question. Look, we have made tremendous progress with ROCm. ROCm 7 is a significant advancement in terms of performance and the frameworks we support. It is extremely important for us to secure day-zero support for all the latest models and native support for all the latest frameworks. I would say that the vast majority of customers starting to use AMD today are having a very smooth experience when migrating their workloads to AMD.
Clearly, there is always more work to do. We continue to enhance our libraries and overall environment, especially as we venture into newer workloads where training and inference truly integrate with reinforcement learning. But overall, I think ROCm has made very strong progress, and we will continue to invest in this area because it is crucial for improving our customers' development experience.
Host:
The next question comes from C.J. Muse at Cantor Fitzgerald. Please go ahead with your question.
C.J. Muse, Analyst:
Yes, good afternoon. Thank you for taking the question. I guess the first one, as you think about the transition from 355 to 400 and moving towards full rack scale, is there a framework we should have in mind for gross margin over the entirety of the 2026 calendar year?
Unnamed Spokesperson:
Yes, thank you for the question. I would say that overall, as we have said before, for our data center GPU business, gross margins typically improve over time as we ramp new generations of products. You usually go through a transition period early in the ramp, and then gross margins normalize. We haven't guided for 2026, but our priority within the data center GPU business is really to drive revenue growth and gross margin dollars, and, alongside that, to continue pushing gross margin percentage higher.
C.J. Muse, Analyst:
That's very helpful. I guess maybe, Lisa, just to dig into your growth expectations for 2026 and beyond, you talked about getting to tens of billions by 2027. Can you just talk broadly about how you see OpenAI and other large customers and how we should think about your customer penetration breadth through the 2026, 2027 calendar years? Any help there would be great. Thank you.
Unnamed Spokesperson:
Certainly, CJ. We will definitely discuss this topic in more detail at next week's Analyst Day, but let me provide some high-level points. Look, we are very excited about our roadmap. We believe we are seeing tremendous traction with our largest customers. The relationship with OpenAI is extremely important to us, and being able to talk about exascale is fantastic because we believe that is exactly what we can deliver to the market. But we are also working deeply with many other customers.
We mentioned OCI. We also announced several systems with the Department of Energy, which are significant systems. We have many other collaborations as well. So you should think of it this way: we expect multiple customers to reach very large scale in the MI450 generation. That’s the breadth of customer partnerships we have built and how we are planning our supply chain to ensure we can supply both our OpenAI partnership and many other partnerships that are proceeding smoothly.
Host:
The next question comes from Stacy Rasgon at Bernstein Research. Please go ahead with your question.
Stacy Rasgon, Analyst:
Hi, everyone. Thank you for taking my question. My first question is about data center performance this quarter: in terms of year-over-year growth, in either dollar or percentage terms, which grew more, servers or data center GPUs?
Dr. Lisa Su, Chair and Chief Executive Officer:
Yes. Stacy, our comment is that the data center achieved solid year-over-year growth in both areas, servers as well as data center AI.
Stacy Rasgon, Analyst:
Can you say, just directionally, whether one grew more than the other? I'm not asking for numbers, just directionally.
Dr. Lisa Su, Chair and Chief Executive Officer:
Directionally, it's similar, though servers were slightly better.
Stacy Rasgon, Analyst:
Okay. Regarding the guidance, you mentioned that the data center as a whole will grow by double digits quarter-over-quarter. You also said that server growth is strong double digits. What does that mean? Is it over 20%? Or how should I interpret what you mean by strong double digits?
Stacy Rasgon, Analyst:
Again, I am trying to understand. For example, regarding GPU revenue for the full year: you mentioned last quarter that it would be around USD 6.5 billion for the year. Do you still believe it remains within that range? It feels like it is still there.
Unnamed Spokesperson:
Thank you, Stacy. Here is what we have guided: we expect data center revenue to grow by double digits quarter-over-quarter, and we said server demand will be strong. At the same time, we also mentioned that MI350 shipments will increase. The figure you just mentioned is not part of our guidance.
Stacy, Analyst:
Okay. So if you say servers will grow strongly, does that mean their growth exceeds Instinct, because you didn't really make a similar comment about the latter?
Unnamed Spokesperson:
No, listen, Stacy, let me clarify. The data center segment is expected to grow by a double-digit percentage quarter-over-quarter, with both servers and data center AI growing from their respective positions. We are satisfied with the performance of both. The comment on 'strong double-digit percentage' refers to the year-over-year comparison.
Host:
The next question comes from Timothy Arcuri at UBS Group. Please go ahead with your question.
Timothy Arcuri, Analyst:
Thank you very much. Lisa, I know it has only been a month since you announced this partnership with OpenAI, but could you share some anecdotes about how this has influenced your standing in the market with other clients? For instance, are you engaging with clients whom you might not have reached out to had this deal not happened? That was the first part of the question.
The second part of the question relates to a prior one, which suggests that [OpenAI] might account for roughly half of your data center GPU revenue in the 2027, 2028 timeframe. In your view, what is the level of risk posed by this single customer to your business?
Dr. Lisa Su, Chair and Chief Executive Officer:
Certainly, Tim. Let me make a few points. First, the partnership with OpenAI has been in the works for quite some time. We are pleased to be able to talk broadly about it, as well as about the scale of deployment and collaboration, which spans multiple years and involves multi-gigawatt projects. All of this, I believe, is highly positive. We also have numerous other partnerships. If you specifically ask about the past month, I would say it’s a combination of several factors at play.
I think the OpenAI partnership is one of them. I also believe that being able to fully showcase the Helios rack at the Open Compute Project was a very significant milestone because people were able to see the engineering and capabilities of the Helios rack. If you ask whether we’ve seen an increase or acceleration in interest, I would say yes. I think customers are engaging more broadly, perhaps at a larger scale, which is certainly a good thing.
From the standpoint of customer concentration, I think one of the critical foundations of our business is having a broad customer base. We work with many clients. I think we are structuring our supply chain in such a way that, as we move into the 2027-2028 timeframe, we will have sufficient supply to serve multiple clients of similar scale. That is certainly the goal.
Timothy Arcuri, Analyst:
Thank you.
Host:
The next question comes from Aaron Rakers at Wells Fargo & Co. Please go ahead with your question.
Aaron Rakers, Analyst:
Yes. Thank you for taking my question. I am curious about the server strength you are observing: could you provide some insight into how we should think about the relationship between unit growth and ASP expansion as we move through the 'Turin' product cycle? How do you view this going forward?
Unnamed Spokesperson:
Yes. So Aaron, clearly on the server CPU side, 'Turin' certainly brings higher content value. So we're seeing ASP growth as 'Turin' ramps. But as I mentioned in my prepared remarks, we're actually still seeing good mix demand for 'Genoa.' So 'Turin' is ramping very quickly, but we also see sustained good demand for 'Genoa' because hyperscalers can't move everything immediately to the latest generation. So from our perspective, it feels like broad-based CPU demand across many different workloads.
It's a bit like a server refresh cycle, but when we look at the conversations with our customers, the breadth of workloads seems to be driven by AI workloads driving more traditional compute. So there's just more build-out required. And I think looking forward, one of the things we see is customers’ desire for the latest generation is even stronger. So while we're happy with the ramp rate of 'Turin,' we're actually seeing strong demand and early engagement for 'Venice,' which really speaks to the importance of specialized compute today.
Aaron Rakers, Analyst:
Yes. Thank you. As a quick follow-up, I don't want to preempt next week's discussion too much. However, Lisa, you have consistently cited a total TAM opportunity for AI silicon of $500 billion, and it is clear that progress is being made toward that target. As we consider these large gigawatt-scale deployments, how do you see your view of the AI silicon TAM evolving looking ahead?
Dr. Lisa Su, Chair and Chief Executive Officer:
Well, Aaron, as you said, not wanting to get too far ahead of what we're going to talk about next week. Listen, we're going to give you a full update on how we see the market next week, but suffice to say, from everything we're seeing, we see the AI compute TAM increasing. So we'll give you some updated numbers. But the point is, when we first talked about $500 billion, it sounded like a lot, but we think we have an even bigger opportunity in the next few years, which is very exciting. Thank you.
Host:
Thank you. The next question is from Antoine at New Street Research. Please go ahead with your question.
Antoine, Analyst:
Hi, thank you very much for taking my question. I would like to ask: could the evolving relationship with OpenAI create tailwinds for the development of your software stack? Could you tell us how this collaboration works in practice, and whether the partnership helps make ROCm more robust?
Unnamed Spokesperson:
Yes. Antoine, thank you for the question. I think the answer is yes. I think all of our large customers help us broaden and deepen our software stack. And certainly, the relationship with OpenAI is one where we plan to do deep collaboration on hardware, software, systems, and future roadmaps. From that perspective, our work with them on Triton has certainly been very valuable.
But I would say, beyond OpenAI, the work we are doing with all of our largest customers is really helpful in terms of strengthening the software stack. We have put significant new resources not only against the largest customers but also in working with a broad set of AI-native companies who are actively developing on the ROCm stack. We are getting a lot of feedback. I think we’ve made significant progress on both our training and inference stacks, and we will continue to double and triple down there.
So the more customers using AMD, I think all of that helps strengthen the ROCm stack. We’ll talk a bit more about this next week, but we’re also using AI to help us accelerate some of the ROCm kernel development as well as the overall ecosystem speed and cadence.
Antoine, Analyst:
Thank you. Thank you, Lisa. Perhaps as a quick follow-up question, could you tell us about the lifespan of GPUs? I know that most CSPs depreciate them over five to six years, but in your conversations with them, are you seeing or hearing any early signs that, in practice, they might plan to keep these GPUs in use longer than that?
Dr. Lisa Su, Chair and Chief Executive Officer:
I think we are seeing some early signs, Antoine. The key point is clear: when building new data center infrastructure, there is definitely a preference for using the latest and best GPUs. The MI355, for example, typically goes into new liquid-cooled facilities, and the same will go for the MI450 series.
But we are also seeing another trend, which is simply the demand for more AI computing power. From this perspective, some of the older-generation products, such as the MI300X, still perform quite well in terms of deployment and usage, especially for inference. So from that standpoint, I think you’re seeing a bit of both.
Host:
The next question comes from Joe Moore of Morgan Stanley. Please go ahead with your question.
Joe Moore, Analyst:
Great. Thank you. You mentioned MI308. I would like to know your position on that, and if there is some sort of relief allowing you to ship, are you prepared to do so? Can you give us an idea of how much of a swing factor this could represent?
Unnamed Spokesperson:
Certainly, Joe. Well, the situation regarding the MI308 remains quite dynamic. That’s why we didn’t include any MI308 revenue in our Q4 guidance. We have received some licenses for the MI308, so we appreciate the government's support for some of those licenses. We are still working with customers on the demand environment and the overall opportunity. So we will be able to provide more updates in the coming months.
Joe Moore, Analyst:
Alright, but if that market does indeed open up, do you have the product to support that market? Or will you have to start rebuilding inventory for that market?
Unnamed Spokesperson:
We have some work-in-progress inventory. I think we will continue to hold it, but we need to see how the demand environment shapes up.
Joe Moore, Analyst:
Okay. Thank you very much.
Matt Ramsey, Vice President of Financial Strategy and Investor Relations:
Thank you. Moderator, I think we may have time for one more phone question.
Host:
Thank you very much. No problem. The next question, which is also the final question, is from Ross Seymore of Deutsche Bank. Please go ahead with your question.
Ross Seymore, Analyst:
Thank you for including me. Lisa, this may take longer than the time we have before the top of the hour, but there have already been several multi-gigawatt announcements from OpenAI. How is AMD truly differentiating itself there? When you see that major customer signing agreements with other GPU suppliers and ASIC suppliers, how do you approach that market differently from those competitors—not only to initially secure the 6-gigawatt order but also to gain more after that?
Dr. Lisa Su, Chair and Chief Executive Officer:
Certainly, Ross. So, listen, the environment I see is that the world needs more AI computing power. From that perspective, I think OpenAI is at the forefront in pursuing more AI computing power, but they are not alone. I think when you look across major customers, there is indeed demand for much more AI computing power in the coming years. I believe we each have advantages in positioning our products. I think the MI450 series, in particular, is an extremely powerful product and solution. You know, overall, when we look at compute performance and memory performance, we believe it is very well positioned for both inference and training.
I think the key here is time to market, total cost of ownership, deep partnerships, and thinking not just about the MI450 series but what comes after that. So we are having in-depth discussions about the MI500 and beyond. And you know, we certainly believe we are well-positioned, not just to participate, but to do so in a very meaningful way within this demand environment. I think we have learned a great deal through our AI roadmap over the past few years. We've made significant progress in understanding what the largest customers need from a workload perspective. So I am quite optimistic about our ability to capture a significant share of this market going forward.
Ross Seymore, Analyst:
Excellent. My follow-up question builds directly on that point. You granted some warrants through this transaction, creating a unique structure, and I understand they vest at prices intended to be favorable to all parties involved. Do you consider this a relatively unique agreement? Or, given the world's growing need for more processing power, is AMD open to adopting similarly creative approaches with other equity instruments over time to meet this demand?
Dr. Lisa Su, Chair and Chief Executive Officer:
Certainly, Ross. I would say this is a unique agreement because it reflects a unique period in AI, where we truly prioritize deep partnerships and multi-year, multi-generational, large-scale collaboration. I think we've achieved that. We have a highly aligned incentive structure: everyone wins. We win, OpenAI wins, our shareholders benefit, and all of that is reflected throughout the entire roadmap.
Looking ahead, we have a number of very interesting partnerships developing, whether with the largest AI users or sovereign AI opportunities that come to mind. We view each one as a unique opportunity and bring AMD’s full capabilities, whether technical or otherwise, into these collaborations. So while I would say OpenAI is very unique, I imagine there are many other opportunities where we can bring our capabilities into the ecosystem and participate in a meaningful way.
Host:
Ladies and gentlemen, this concludes the Q&A session, and it also concludes today’s conference call. Thank you for your participation. You may now disconnect.
This article is reprinted from 'Wall Street News', edited by Zhitong Finance: Li Fo.