
AMD Earnings Call Transcript: Partnership with OpenAI to Drive $100 Billion Revenue, 2026 Product Iteration Critical for Business

cls.cn · Nov 5 13:05

① During the earnings call, Dr. Lisa Su, CEO of AMD, and Jean Hu, CFO of AMD, answered numerous questions from analysts; ② Lisa Su stated that the company's six-gigawatt Instinct GPU deployment agreement with OpenAI is expected to contribute over USD 100 billion in revenue over the next few years, while also confirming that the company's AI business annual revenue will reach tens of billions of dollars by 2027.

Cailian Press News, November 5 (Edited by Liu Rui) – After the market closed on Tuesday Eastern Time, AMD (Advanced Micro Devices) released its Q3 2025 earnings report and held a conference call, delivering record-high results for both revenue and profitability.

During the earnings call, Dr. Lisa Su, CEO of AMD, and Jean Hu, CFO of AMD, addressed numerous questions from analysts, with particular emphasis on disclosing details of AMD's multi-year agreement with OpenAI as well as updates on its technology and product advancements.

Lisa Su stated that the company's six-gigawatt Instinct GPU deployment agreement with OpenAI is expected to contribute over USD 100 billion in revenue over the next few years, while also confirming that the company’s AI business annual revenue will reach tens of billions of dollars by 2027.

Key Focus Areas of the Earnings Call

1. Performance Highlights: Revenue and profit growth driven by synergies across multiple business segments

Overall Performance: Q3 revenue of USD 9.2 billion (up 36% year-over-year and 20% quarter-over-quarter); net income up 31% year-over-year; record free cash flow of USD 1.5 billion; non-GAAP gross margin of 54%; diluted earnings per share of USD 1.20 (up 30% year-over-year). All core financial metrics exceeded market expectations.

Segment Breakdown: Data center business emerged as the primary growth driver, generating revenue of USD 4.3 billion (up 22% year-over-year and 34% quarter-over-quarter), with the fifth-generation EPYC Turin CPU accounting for nearly 50% of total EPYC revenue, and sales of the Instinct MI350 series GPUs significantly increasing; client and gaming business revenue totaled USD 4 billion (up 73% year-over-year), with Ryzen 9000 processors driving desktop CPU sales to new highs, while gaming revenue surged 181% due to strong performance of Radeon 9000 graphics cards and semi-custom products; embedded business revenue was USD 857 million (down 8% year-over-year but up 4% quarter-over-quarter), with design wins exceeding USD 14 billion within the year, continuing its record-breaking momentum.

2. AI Business: Record-breaking order sizes, breakthroughs in both technology and partnerships

Major partnership secured: A multi-year agreement signed with OpenAI to deploy six gigawatts of Instinct GPUs, with the first gigawatt of MI450 series deployments scheduled for the second half of 2026, expected to contribute over USD 100 billion in revenue; Oracle will be the first partner to deploy tens of thousands of MI450 GPUs starting in 2026; collaboration with the U.S. Department of Energy to build the "Lux" AI factory and the "Discovery" supercomputer is strengthening AMD's position in national-level AI and supercomputing domains.

Technology and Product Progress: The ROCm 7 software platform delivers significant performance improvements (inference performance up 4.6 times and training performance up 3 times versus the previous generation), with support from developers such as Hugging Face; the MI400 series GPU and Helios rack solution will launch in 2026, supporting Meta's Open Rack Wide standard. The Venice 2-nanometer CPU has entered lab testing, with customer interest at an all-time high.

Market Demand and Supply Chain: Hyperscale cloud service providers exhibit strong demand for AI computing, with multiple customers planning to expand CPU deployments by 2026. The company ensures large-scale production of Helios through the sale of its ZT manufacturing business and partnership with Sanmina, while affirming that the supply chain is well-prepared to support growth demands from 2026 to 2027.

3. Future Outlook: Product iteration in 2026 will be pivotal, with clear paths for multi-business growth

Short-term Guidance (Q4 2025): Revenue is projected at $9.6 billion (±$300 million), representing a 25% year-over-year increase, with double-digit growth in the data center business (ongoing capacity expansion for MI350), growth in client business, a short-term decline in gaming business, and recovery in embedded business. Non-GAAP gross margin is expected to be 54.5%, with operating expenses around $2.8 billion.

Medium- and Long-term Planning: In 2026, the focus will be on launching the Venice 2-nanometer CPU, MI400 series GPU, and Helios rack solution, aiming to achieve scaled growth in AI business. By 2027, annual revenue from AI business is targeted to reach “tens of billions of dollars.” The company also plans to drive continuous outperformance in client and server businesses through product portfolio optimization and market share gains.

4. Investor Concerns: Responses on Gross Margin, Customer Concentration, and Supply Chain Risks

Gross Margin Trend: Gross margins in the data center GPU business will gradually improve with increased capacity of next-generation products (e.g., MI400) and stabilize after a short transition period. The company prioritizes dual growth in revenue and gross profit.

Customer Concentration Risk: Although the collaboration with OpenAI is substantial, the company emphasizes having built a diversified customer base (including Oracle, the Department of Energy, and several hyperscale cloud service providers). Supply chain planning supports large-scale deployments across multiple customers, reducing reliance on any single customer.

Supply Chain and External Restrictions: Revenue from the MI308 GPU has not yet been recognized, though partial export licenses have been obtained. Discussions with customers about demand are ongoing, and supply will be adjusted to market conditions. Risks from industry-wide power and component shortages are manageable, and the company is working with ecosystem partners to ensure 2026 deployment needs are met.

Below is the full transcript of the earnings call (AI-assisted translation, with some content edited for brevity).

Matt Ramsay, Vice President of Financial Strategy and Investor Relations at AMD: Thank you all for joining AMD's Third Quarter 2025 Financial Results Conference Call. The executives participating in this call include Dr. Lisa Su, AMD Chairman and Chief Executive Officer, and Jean Hu, AMD Executive Vice President, Chief Financial Officer, and Treasurer.

This conference is being live-streamed and will be available for replay via webcast on our official website after the event.

Before we begin, I would like to note that Dr. Lisa Su, along with AMD’s executive team, will present our long-term financial strategy at the “Financial Analyst Day” event in New York next Tuesday, November 11. On December 3 (Wednesday), Dr. Lisa Su will deliver a speech at the UBS Group Global Technology and Artificial Intelligence Conference. Finally, on December 10 (Wednesday), Jean Hu will speak at the 23rd Barclays Global Technology Conference.

Next, I will hand over the meeting to Dr. Lisa Su.

Dr. Lisa Su, AMD Chairman and Chief Executive Officer: Thank you, Matt, and good afternoon to everyone listening to today’s call. This quarter, we delivered outstanding results with record revenue and profitability driven by strong demand across data center artificial intelligence (AI), server, and personal computer (PC) businesses. Revenue grew 36% year-over-year to $9.2 billion, net profit increased by 31% year-over-year, and free cash flow more than doubled, primarily propelled by record sales of EPYC, Ryzen, and Instinct processors.

The record-breaking performance in the third quarter marks a significant leap in our growth trajectory. This was driven by the continued expansion of our computing portfolio and the rapid scaling of our data center AI business, both of which contributed to substantial increases in revenue and profitability.

Now, let me provide an update on our business segments. Revenue from the data center segment grew 22% year-over-year to $4.3 billion, reaching a new high, primarily due to increased production of the Instinct MI350 series graphics processing units (GPUs) and growing market share in servers. Server central processing unit (CPU) revenue reached a historical peak as adoption of the fifth-generation EPYC Turin processors accelerated, accounting for nearly 50% of total EPYC revenue this quarter. Additionally, sales of the previous generation EPYC processors remained robust this quarter, reflecting their strong competitive advantages across various workload scenarios.

In cloud computing, we achieved record sales. Hyperscalers expanded the deployment of EPYC CPUs, both to support their internal services and to power public cloud offerings. This quarter, hyperscalers launched over 160 cloud instances based on EPYC processors, including new Turin processor-related instances introduced by Google, Microsoft Azure, and Alibaba, delivering unparalleled performance and cost-effectiveness across diverse workloads. Currently, there are over 1,350 public EPYC cloud instances available globally, representing nearly a 50% increase compared to the same period last year.

Adoption of EPYC processors in the cloud by large enterprises more than doubled year-over-year. As our share in the on-premises deployment market grows, enterprise customers are increasingly demanding AMD cloud instances to support hybrid computing models. We expect cloud computing demand to remain robust as hyperscalers significantly enhance general-purpose computing capabilities while expanding AI workloads. Many customers plan to substantially increase CPU deployments in the coming quarters to meet rising AI demands, providing a strong new driver for our server business.

Turning to adoption in the enterprise market, sell-through of EPYC servers increased significantly both year-over-year and quarter-over-quarter, indicating accelerating adoption rates. Vendors such as HPE, Dell, Lenovo, and Supermicro have launched over 170 platforms based on the fifth-generation EPYC processors, marking our most comprehensive product portfolio to date, with solutions optimized for nearly all enterprise workload scenarios.

This quarter, we secured large new customers across multiple key verticals, including Fortune 500 companies in technology, telecommunications, financial services, retail, streaming, social media, and automotive sectors, further expanding our footprint across major industries. The superior performance and total cost of ownership (TCO) advantages of the EPYC product portfolio, coupled with increasing investments in marketing efforts and an expanding range of products from mainstream server and solution providers, have collectively established a solid foundation for us to continue gaining market share in the enterprise segment.

Looking ahead, we are on track to launch the next-generation EPYC Venice processor based on 2-nanometer technology in 2026. The Venice processor has entered laboratory testing and is demonstrating exceptional results, with significant improvements in performance, energy efficiency, and compute density. Customer demand and collaboration interest in Venice have reached unprecedented levels, reflecting both our competitive advantages and the growing need for data center computing power. Several cloud service providers and original equipment manufacturer (OEM) partners have already deployed the first Venice platforms, laying the groundwork for extensive solution availability and cloud deployments upon the product’s release.

Turning to data center AI business: Our Instinct GPU business continues to exhibit strong growth momentum. Driven by a substantial increase in sales of the Instinct MI350 series GPUs and the expanding deployment of the MI300 series, revenue in this segment grew year-over-year. Currently, several large cloud service providers and AI vendors have initiated deployments of the MI350 series, with more large-scale deployment plans expected to roll out over the coming quarters.

Oracle became the first hyperscale cloud service provider to publicly offer MI355X instances, which deliver significantly higher performance for real-time inference and multimodal training workloads on OCI Zettascale superclusters. This quarter, emerging cloud service providers (neoclouds) such as Crusoe, DigitalOcean, TensorWave, and Vultr also began rolling out public cloud services based on the MI350 series.

In the AI developer space, the deployment scope of the MI300 series GPUs expanded further this quarter. IBM and Zyphra will train multiple generations of future multimodal models on large-scale MI300X clusters; Cohere is already using MI300X on OCI to train its Command series models. In the inference domain, several new partners, including Character AI and Luma AI, are now running production-level workloads on the MI300 series, demonstrating the performance and TCO advantages of our architecture in real-time AI applications.

This quarter, we also made significant progress in the software domain. We launched ROCm 7, the most advanced and feature-rich version of ROCm to date. Compared to ROCm 6, ROCm 7 delivers up to 4.6 times higher inference performance and three times better training performance. Additionally, ROCm 7 introduces seamless distributed inference capabilities, enhances cross-hardware code portability, and adds enterprise-grade tools to simplify the deployment and management processes of Instinct solutions.

Importantly, our open software strategy has been well received by the developer community. Partners such as Hugging Face and the vLLM and SGLang projects have contributed directly to the development of ROCm 7, helping us position ROCm as an open platform for large-scale AI development.

Looking ahead, our data center AI business is entering a new phase of growth. Even ahead of the 2026 launch of the next-generation MI400 series accelerators and Helios rack-scale solutions, customer engagement has rapidly increased. The MI400 series combines a new compute engine, industry-leading memory capacity, and advanced networking capabilities, delivering a significant leap in performance for the most complex AI training and inference workloads.

The MI400 series integrates our expertise in chips, software, and systems to power Helios, our rack-scale AI platform. Designed to redefine performance and energy efficiency standards at the data center scale, the Helios platform integrates Instinct MI400 series GPUs, Venice EPYC CPUs, and Pensando network interface cards into a double-wide rack solution optimized for the performance, power consumption, thermal management, and maintainability required by next-generation AI infrastructure, while supporting Meta's new Open Rack Wide standard.

With deep technical support from an increasing number of hyperscale cloud service providers, AI enterprises, OEMs, and original design manufacturers (ODMs), the development of the MI400 series GPUs and Helios racks is progressing rapidly, laying the groundwork for large-scale deployments next year. The ZT Systems team, acquired last year, plays a critical role in the development of Helios, leveraging decades of experience building infrastructure for the world’s largest cloud service providers to ensure customers can quickly deploy and scale the Helios platform in their environments. Additionally, last week, we completed the sale of ZT’s manufacturing operations to Sanmina and established a strategic partnership designating Sanmina as the primary manufacturing partner for Helios. This collaboration will accelerate the deployment of our rack-scale AI solutions among large customers.

In terms of customer collaboration, we announced a comprehensive multi-year agreement with OpenAI to deploy 6 gigawatts (GW) of Instinct GPUs, with the first gigawatt-scale deployment of MI450 series accelerators expected to commence in the second half of 2026. This partnership establishes AMD as OpenAI’s core computing supplier and highlights our strategic advantages in hardware, software, and full-stack solutions.

Moving forward, AMD and OpenAI will engage in closer collaboration on future hardware, software, networking, and system-level roadmaps and technologies. OpenAI's decision to utilize the AMD Instinct platform for its most complex and sophisticated AI workloads clearly demonstrates that our Instinct GPUs, combined with the ROCm open software stack, can meet the performance and total cost of ownership requirements in the most demanding deployment scenarios. We anticipate that this collaboration will significantly drive the growth of our data center AI business, potentially generating over $100 billion in revenue over the next few years.

Oracle also announced that it will become a key launch partner for the MI450 series, planning to deploy tens of thousands of MI450 GPUs within Oracle Cloud Infrastructure (OCI) starting in 2026, with further scaling expected in 2027 and beyond.

Additionally, our Instinct platform is gaining increasing recognition in Sovereign AI and national supercomputing initiatives. In the United Arab Emirates (UAE), Cisco and G42 will deploy a large-scale AI cluster powered by Instinct MI350X GPUs to support the country’s most advanced AI workloads. In the United States, we are collaborating with the Department of Energy and Oak Ridge National Laboratory alongside industry partners such as OCI and HPE to establish Lux AI, the first AI factory focused on scientific discovery. This AI factory will leverage our Instinct MI350 series GPUs, EPYC CPUs, and Pensando networking products, and is expected to be operational in early 2026, providing a secure and open platform for large-scale training and distributed inference.

The U.S. Department of Energy has also selected our upcoming MI430X GPUs and EPYC Venice CPUs to power Oak Ridge National Laboratory’s next-generation flagship supercomputer, Discovery. This supercomputer aims to set new standards for AI-driven scientific computing and reinforce the United States’ leadership in high-performance computing. Our MI430X GPUs, designed to support national AI and supercomputing projects, will further solidify our leading position in enabling the world’s most powerful computers, driving the next generation of scientific breakthroughs.

In summary, our AI business is entering a new phase of growth, supported by our leading rack-scale solutions, expanding customer adoption, and growing global large-scale deployments. We are on a clear trajectory to achieve tens of billions of dollars in annual AI-related revenue by 2027. I look forward to sharing more details about the growth plans for our data center AI business during next week's Financial Analyst Day event.

Turning now to our Client and Gaming segment: Revenue grew 73% year-over-year to reach $4 billion. Our PC processor business performed exceptionally well, achieving record quarterly sales driven by strong demand conditions and the momentum from our leading Ryzen product portfolio.

Desktop CPU sales reached an all-time high, with robust demand for the Ryzen 9000 series processors driving both channel sell-in and sell-out volumes to new records. These processors deliver unparalleled performance across gaming, productivity, and content creation applications. Additionally, Ryzen-powered laptops saw significant sell-out growth through OEM channels this quarter, reflecting sustained end-user demand for premium gaming and commercial AMD PCs.

Momentum in the commercial segment accelerated further this quarter, driven by large-scale purchases from Fortune 500 companies in industries such as healthcare, financial services, manufacturing, automotive, and pharmaceuticals. Adoption of Ryzen-powered PCs surged, with commercial sell-out volumes growing over 30% year-over-year.

Looking ahead, with the strength of our Ryzen product portfolio, broader platform coverage, and increased investments in marketing efforts, we believe our client business is well-positioned to continue growing at a faster pace than the overall PC market.

In terms of gaming: revenue increased by 181% year-over-year, reaching USD 1.3 billion. Revenue from semi-custom products grew as Sony and Microsoft prepared for the upcoming holiday sales season. In the gaming graphics card segment, both revenue and channel unit sales achieved significant growth due to the leading cost-performance advantage of the Radeon 9000 series. Our machine learning super-resolution technology, FSR 4, gained rapid adoption this quarter, with the number of supported games doubling since its launch to exceed 85 titles. This technology enhances frame rates and delivers more immersive visual experiences.

Finally, in the embedded business segment: revenue declined by 8% year-over-year to USD 857 million. On a sequential basis, revenue and unit sales grew, driven by recovering demand across multiple markets, including simulation testing, aerospace and defense, industrial vision, and healthcare.

We further expanded our embedded product portfolio by launching new solutions, reinforcing our leadership in adaptive computing and x86 computing. We have begun delivering the second-generation Versal Prime series of industry-leading adaptive system-on-chips (SoCs) to key customers; provided the first batch of Versal RF development platforms to support multiple next-generation design projects; and introduced the Ryzen Embedded 9000 Series, which achieves industry-leading performance-per-watt and latency, suitable for robotics, edge computing, and smart factory applications.

The momentum of design wins for our embedded product portfolio remains strong, positioning us to set a record for design wins for the second consecutive year. To date, the total value of design wins for this year has exceeded USD 14 billion, reflecting the increasing adoption of our leading products across various markets and application scenarios.

Overall, our record-breaking performance in Q3 and robust outlook for Q4 demonstrate the powerful momentum being built across all our business segments, driven by sustained product leadership and disciplined execution. With the expansion of the total addressable market (TAM), accelerating adoption of the Instinct platform, and growing market share of EPYC and Ryzen CPUs, our data center AI, server, and PC businesses are poised to enter a period of strong growth.

Currently, the demand for computing power has reached unprecedented levels — every major breakthrough in commerce, science, and society now depends on stronger, more efficient, and smarter computing capabilities. These trends present AMD with unparalleled growth opportunities. I look forward to providing a detailed overview of our strategy, product roadmap, and long-term financial goals at next week's Financial Analyst Day.

I will now hand over the meeting to Jean Hu, who will provide further insights into our Q3 performance.

Jean Hu: Thank you, Lisa, and good afternoon, everyone. I will first review our financial results and then outline the outlook for Q4 of fiscal year 2025.

We are pleased with our financial performance in Q3. This quarter, we achieved record revenue of USD 9.2 billion, representing a 36% year-over-year increase and exceeding the high end of our guidance, reflecting the strong momentum across our business segments. It is important to note that Q3 results did not include any revenue from the export of MI308 graphics cards to the Chinese market.

Revenue grew 20% sequentially, driven by robust growth in the data center, client, and gaming segments, along with moderate growth in the embedded business segment.

The gross margin was 54%, an increase of 40 basis points year-over-year, primarily driven by product mix optimization. Operating expenses amounted to approximately USD 2.8 billion, a year-over-year increase of 42%, reflecting continued heavy investments in research and development (R&D) to seize significant AI opportunities, as well as increased marketing expenditures to drive revenue growth. Operating profit reached USD 2.2 billion, with an operating margin of 24%. Total taxes, interest expenses, and other costs amounted to USD 273 million.

In the third quarter of 2025, diluted earnings per share (EPS) were USD 1.20, representing a 30% increase from USD 0.92 in the same period last year.

Next, I will provide an overview of each reportable business segment, starting with the Data Center segment. Revenue from the Data Center segment reached a record USD 4.3 billion, a year-over-year increase of 22%, driven by strong demand for fifth-generation EPYC processors and the Instinct MI350 series GPUs. On a sequential basis, revenue from the Data Center segment grew 34%, propelled by a significant increase in production capacity for the AMD Instinct MI350 series GPUs.

Operating profit for the Data Center segment was USD 1.1 billion, representing 25% of the segment's revenue, compared to USD 1.0 billion, or 29% of revenue, in the same period last year. The profit growth was primarily attributable to higher revenue, partially offset by increased R&D investments aimed at capturing significant AI opportunities.

Revenue from the Client and Gaming segment hit a record USD 4.0 billion, marking a year-over-year increase of 73% and a sequential increase of 12%, driven by robust demand for the latest generation of client processors and GPUs, as well as higher sales of gaming consoles. Within this, client business revenue reached a record USD 2.8 billion, growing 46% year-over-year and 10% sequentially, primarily driven by record Ryzen processor sales and an improved product mix. Gaming business revenue rose to USD 1.3 billion, increasing 181% year-over-year and 16% sequentially, supported by higher semi-custom product revenue and strong demand for Radeon GPUs.

Operating profit for the Client and Gaming segment was USD 867 million, representing 21% of the segment’s revenue, compared to USD 288 million, or 12% of revenue, in the same period last year. Profit growth was mainly driven by higher revenue, partially offset by increased marketing expenditures to support revenue growth.

Revenue from the Embedded segment amounted to USD 857 million, a year-over-year decrease of 8% but a sequential increase of 4%, reflecting some recovery in demand across multiple end markets. Operating profit for the Embedded segment was USD 283 million, representing 33% of the segment’s revenue, compared to USD 372 million, or 40% of revenue, in the same period last year. The decline in operating profit was primarily due to lower revenue and changes in the end-market mix.

Before reviewing the balance sheet and cash flow, please note that last week we completed the sale of our ZT Systems manufacturing business to Sanmina. The financial results of the ZT manufacturing business for the third quarter have been separately reported as discontinued operations in our financial statements and are excluded from non-GAAP financial metrics.

Regarding the balance sheet and cash flow: This quarter, cash flow from operating activities of our continuing operations was USD 1.8 billion, and free cash flow reached a record USD 1.5 billion. We returned USD 89 million to shareholders through stock repurchases, bringing total stock repurchases for the first three quarters of 2025 to USD 1.3 billion. As of the end of this quarter, our stock repurchase program had USD 9.4 billion of authorized capacity remaining.

As of the end of this quarter, our cash, cash equivalents, and short-term investments totaled USD 7.2 billion, while total debt stood at USD 3.2 billion.

Next, we will introduce the outlook for the fourth quarter of 2025. It should be noted that our Q4 outlook does not include any revenue generated from the export of AMD Instinct MI308 GPUs to the Chinese market.

We expect revenue for the fourth quarter of 2025 to be approximately $9.6 billion, plus or minus $300 million. The midpoint of this guidance corresponds to a year-over-year revenue growth rate of approximately 25%, driven by double-digit growth in both the data center and client/gaming segments, as well as a return to growth in the embedded segment.

On a sequential basis, we anticipate revenue growth of approximately 4%, driven by the following factors: double-digit growth in the data center segment (supported by robust server business performance and continued ramp-up of the MI350 series GPUs); a decline in the client and gaming segments (with growth in client revenue offset by a double-digit drop in gaming revenue); and double-digit growth in the embedded segment.

Additionally, we forecast a non-GAAP gross margin of approximately 54.5% for the fourth quarter; non-GAAP operating expenses of around $2.8 billion; net interest and other items projected to be approximately $37 million of income; an effective non-GAAP tax rate estimated at 13%; and diluted shares outstanding expected to be approximately 1.65 billion.

In summary, our execution has been strong, with record-breaking revenue achieved in each of the first three quarters of this year. Our strategic investments are positioning us to fully capitalize on expanding AI opportunities across all end markets, driving sustainable long-term revenue growth and profitability while creating significant value for shareholders.

I will now hand the meeting back to Matt for the Q&A session.

Matt: Thank you very much. We are now ready to take questions.

Operator: Please hold on while we collect the questions. The first question comes from Vivek Arya of Bank of America Securities. Please go ahead with your question.

Vivek Arya: Thank you for taking my question. I have one short-term question and one medium-term question. In the short term, Dr. Lisa Su, could you provide some insights into the revenue mix of CPUs and GPUs for the third and fourth quarters? From a strategic perspective, how will your company manage the transition from the MI355 to the MI400 in the second half of next year? Can your company sustain the current (Q4) growth trajectory through the first half of next year before customers adopt the MI400 series, or should we anticipate a period of stagnation or market digestion?

Lisa Su: Thank you for your question, Vivek. Let me briefly address a few points. This quarter, our data center business performed exceptionally well, with both server and data center AI segments exceeding expectations. Importantly, these results were achieved without accounting for any sales of the MI308.

The production capacity increase for MI355 is proceeding very smoothly. We originally anticipated significant production growth in the third quarter, and indeed, that has been the case. Moreover, we are seeing further growth in server CPU sales — not only in the short term but also based on customer projections for the next few quarters, indicating sustained high demand, which is a positive signal.

Looking ahead to the fourth quarter, the data center business will remain strong, with revenue achieving double-digit growth quarter-over-quarter. Both the server and data center AI segments will contribute to this growth, driven by the continuous strength of these two business units.

Regarding your second question: Clearly, we have not yet disclosed specific plans for 2026, but based on the current situation, we expect market demand conditions to remain favorable. Therefore, we anticipate that MI355 will continue to expand production in the first half of 2026; as previously mentioned, the MI450 series will enter the market in the second half of 2026, at which point we expect data center AI business to experience even faster growth in the latter half of the year.

Vivek Arya: Understood. My follow-up question is: There is currently debate within the industry about whether OpenAI, while constrained by power supply and capital expenditure (CapEx) and considering existing cloud service provider (CSP) partnerships, is simultaneously collaborating with three major vendors and ASIC suppliers. What is your company’s perspective on this? What visibility do you have regarding initial collaboration with OpenAI? More importantly, what can we expect when the scope of cooperation expands further by 2027? Is there a way to estimate OpenAI’s resource allocation ratio among various suppliers? Or how should we view visibility issues in this key client collaboration?

Lisa Su: Excellent question, Vivek. Clearly, we are incredibly excited about our collaboration with OpenAI — it is a highly significant partnership. The AI industry is currently in a very unique phase, with extremely high demand for computing capabilities across various workloads. In our collaboration with OpenAI, we have planned for multiple quarters ahead to ensure that power supply and the supply chain can keep up in a timely manner.

The key information is that the first deployment at the gigawatt scale will commence in the second half of 2026, and preparations are proceeding well. Considering lead times and other factors, we are working closely with OpenAI and cloud service provider partners to ensure that we can deploy the Helios platform and related technologies as planned.

Overall, our collaboration is proceeding very smoothly. We have clear visibility into the production ramp-up schedule for MI450, and all tasks are progressing according to plan.

Operator: The next question comes from Thomas O’Malley of Barclays. Please go ahead with your question.

Thomas O’Malley: Good morning. Thank you for taking my question, and congratulations on your excellent results. My first question relates to the Helios platform. Clearly, with the recent announcements around the Open Compute Project (OCP), customer engagement is expected to increase. Could you discuss how you see the mix between discrete component sales and system sales evolving next year? When do you anticipate the crossover point where system sales surpass discrete component sales? Additionally, what has been the initial feedback from customers who have had the opportunity to closely examine the Helios platform at trade shows?

Lisa Su: Thank you for the question, Tom. There is very high interest in MI450 and the Helios platform, and customer response at the OCP event was particularly enthusiastic. We engaged with numerous customers, many of whom brought their engineering teams to gain deeper insights into the system’s details and how it is constructed.

There has been ongoing discussion within the industry about the complexity of rack-based systems, and these systems are indeed highly intricate. We are extremely proud of the design of Helios — it incorporates all the features and functionalities that users expect, delivering outstanding performance in terms of reliability, processing power, and energy efficiency.

In recent weeks, interest in MI450 and Helios has further increased following announcements of our partnerships with OpenAI and Oracle Cloud Infrastructure (OCI), as well as collaborations with Meta at the OCP Summit.

Overall, we believe that the Helios platform is making good progress both in research and development and in customer collaborations. For rack-based solutions, we anticipate that early adopters of MI450 will primarily opt for full rack configurations; however, while other form factors will be available within the MI450 series, market demand for complete rack-based solutions is currently very high.

Thomas O'Malley: Very helpful. My follow-up question is broader in scope and somewhat related to Vivek's earlier inquiry. Based on the plans announced for early next year, some projects have significant power requirements. Additionally, the industry is facing supply issues related to interconnected memory components. As an industry leader, where does your company see potential bottlenecks emerging? Will component shortages occur first, or will data center infrastructure (such as physical space) or power supply become limiting factors affecting large-scale deployment plans for next year?

Lisa Su: Alright, Tom. The question you've raised is a challenge that our entire industry must address collectively — the entire ecosystem needs to coordinate planning, which is exactly what we're doing right now. We are working with customers to plan power supply solutions for the next two years; meanwhile, in areas such as chips, memory, packaging, and component supply chains, we are collaborating closely with supply chain partners to ensure that production capacity across all segments can keep pace in a timely manner.

Based on our current visibility, we have full confidence in the strength of the supply chain — it is well-prepared to support significant growth and meet the market demand for large-scale computing capabilities.

Of course, all segments will operate under tight conditions. Observing capital expenditure trends among certain companies reveals a strong willingness to expand computing capacity, and we are working closely to meet this demand. I would say that when the industry faces such supply constraints, the entire ecosystem mobilizes to address the challenges. Meanwhile, as we continue investing in areas such as power supply and component availability, we are seeing gradual alleviation of these bottlenecks.

In summary, we are highly confident that as we transition into the second half of 2026 and move toward 2027 with the launch of MI450 and the Helios platform, we will achieve substantial growth.

Operator: The next question comes from Joshua Buchalter of TD Cowen. Please go ahead.

Joshua Buchalter: Hello everyone. Thank you for taking my question. I’d like to start with the CPU business. Both your company and its main competitors in the CPU space have noted strong demand for general-purpose servers driven by agentic AI workloads. Could you comment on the sustainability of this trend? Competitors have mentioned supply chain constraints; are you observing similar conditions within your own supply chain? Additionally, should we view the data center CPU business as being in a non-seasonal cycle, or do you expect a return to normal seasonal fluctuations in the first half of next year?

Lisa Su: Alright, Joshua, let me share a few insights on the server CPU business. Over the past few quarters, we have been closely monitoring this trend. In fact, several quarters ago, we began observing positive signals in CPU demand. As we move further into 2025, we are seeing a gradual expansion in CPU demand — several large hyperscale cloud service providers have started forecasting significant increases in CPU deployments for 2026. From this perspective, the current demand environment for CPUs is highly positive.

This development is driven by the fact that AI workloads require substantial general-purpose computing power, which aligns perfectly with the production ramp of our Turin processors. The ramp-up of Turin processors has far exceeded expectations, and demand for this product is very strong. At the same time, demand for our other product lines remains robust and stable.

Regarding seasonality in 2026: We expect the CPU demand environment to remain positive throughout the year. By the end of the year, we will provide more detailed guidance, but at this stage, as AI workloads increasingly move into practical applications, the demand for computing power is set to continue growing, and CPU demand is expected to remain strong. This trend is sustainable and not a short-term phenomenon; it will likely persist over multiple quarters.

On the supply chain side, Joshua, we have sufficient capacity to support growth, especially in 2026, where we are well-prepared for capacity expansion.

Joshua Buchalter: Thank you both for your answers. My follow-up question is for Dr. Lisa Su. In your prepared remarks, you mentioned progress on ROCm 7 — we know that ROCm has been a key area of investment for your company. Could you spend a minute or two discussing how ROCm is positioned competitively? What level of support can your company provide to the developer community? And in terms of narrowing potential competitive gaps, what areas does your company still need to focus on?

Lisa Su: Sure, Joshua, thank you for the question. ROCm has made significant progress, with ROCm 7 achieving major breakthroughs in performance improvements and expanded framework support. For us, ensuring 'day-zero support' for all the latest models and native support for all the latest frameworks is critical.

Currently, most new customers adopting AMD products experience a very smooth transition when migrating their workloads to the AMD platform. Of course, there is room for improvement — we are continuously expanding our library resources and overall ecosystem, particularly in next-generation workloads that combine training and inference with reinforcement learning.

Overall, the progress of ROCm has been remarkable. It is worth noting that we will continue to invest heavily in this area because delivering a seamless customer development experience is crucial for us.

Operator: The next question comes from C.J. Muse of Cantor Fitzgerald. Please go ahead with your question.

C.J. Muse: Good afternoon, and thank you for taking my question. My first question is: As we transition from MI355 to MI400 and gradually move toward full rack-scale solutions, how should we think about the trajectory of gross margins throughout 2026?

Jean Hu: Alright, C.J., thank you for your question. Overall, as we have mentioned previously, in the data center GPU business, gross margin continues to improve as production of a new-generation product ramps. Typically, there is a transitional period during the initial phase of the ramp, followed by a gradual stabilization of gross margin.

We are not yet providing specific guidance for 2026, but the primary goal for the data center GPU business is to achieve significant revenue growth and an increase in gross profit, while also continuing to drive up the gross margin percentage.

C.J. Muse: Very helpful. My follow-up question is, Dr. Lisa Su, could you share your company’s growth expectations for 2026 and beyond? You previously mentioned that annual AI business revenue is expected to reach tens of billions of dollars by 2027. From a macro perspective, how does your company view collaborations with OpenAI and other major clients? How should we understand the expansion of your customer base from 2026 to 2027? Any information on this would be very helpful.

Lisa Su: Alright, C.J. We will discuss this topic in more detail at next week’s Financial Analyst Day, but I can share some macro-level perspectives now.

First, we are highly confident in our product roadmap and have made significant progress with major clients. Our collaboration with OpenAI is particularly meaningful, and achieving gigawatt-scale cooperation also demonstrates our ability to deliver large-scale computing solutions to the market.

In addition to OpenAI, we are also engaged in deep collaborations with many other clients. For example, we previously mentioned our partnership with Oracle Cloud Infrastructure (OCI) and announced several large system projects in collaboration with the U.S. Department of Energy. Currently, we have numerous other ongoing collaboration projects.

Thus, it can be understood as follows: During the MI450 product cycle, we expect multiple clients to achieve large-scale deployments, reflecting the breadth of our client partnerships. At the same time, we are scaling our supply chain to ensure we can meet the needs of our collaboration with OpenAI while also supporting the advancement of numerous other collaboration projects.

Operator: The next question comes from Stacy Rasgon of Bernstein Research. Please proceed with your question.

Stacy Rasgon: Hello everyone. Thank you for taking my question. My first question is: In this quarter’s data center segment, which grew faster year-over-year in terms of dollar amount and percentage growth — the server business or the GPU business?

Lisa Su: Stacy, as we mentioned earlier, both the server business and the data center AI business (GPU business) within the data center segment achieved strong year-over-year growth this quarter.

Stacy Rasgon: Could you please elaborate further? Just in terms of growth trends, which business is growing faster? I don’t need specific numbers, just a general sense of the trend.

Lisa Su: In terms of growth trends, both are growing at a similar pace, but the server business is slightly faster.

Stacy Rasgon: Got it. Regarding guidance: your company mentioned that the data center segment as a whole will achieve double-digit growth, with the server business achieving 'strong double-digit growth.' What exactly does that mean? Does it imply growth exceeding 20%? I’d like to understand the specific definition of 'strong double-digit growth.' Additionally, for the full year, is GPU business revenue still within the approximately USD 6.5 billion range you mentioned last quarter? Current conditions seem to remain consistent with that expectation.

Jean Hu: Stacy, our guidance is that data center segment revenue will grow by double digits quarter-over-quarter, with strong growth in the server business and continued expansion of the MI350 series. The USD 6.5 billion revenue expectation you mentioned was not part of our previously issued guidance.

Stacy Rasgon: Understood. So, when your company mentions 'strong growth' in the server business, does that imply its growth rate will exceed that of the Instinct (GPU) business? Because you did not explicitly mention the growth situation for the Instinct business.

Lisa Su: Stacy, let me clarify. Data center segment revenue will grow by double digits quarter-over-quarter, driven by both the server business and the data center AI business (GPU business), both of which will see growth. The 'strong double-digit growth' we referenced earlier likely pertains to year-over-year growth.

Operator: The next question comes from Timothy Arcuri of UBS. Please proceed with your question.

Timothy Arcuri: Thank you very much. Dr. Lisa Su, it has been a month since you announced the collaboration with OpenAI. Could you share some examples of how this partnership has influenced your company’s market positioning and whether it has led to engagement with clients you had not previously worked with? That’s my first question.

The second question relates to the previous one: Looking at the timeframe from 2027 to 2028, the collaboration with OpenAI might account for about half of your data center GPU revenue. In your view, how risky is having such a high dependency on a single client?

Lisa Su: Okay, Tim. Let me address your questions in two parts. First, the collaboration with OpenAI has been in the works for quite some time, and we are pleased to publicly announce it while sharing key details such as deployment scale (gigawatt-level) and the duration of the partnership (multi-year). All of this information is very positive.

Moreover, over the past month, several other factors have driven our business growth. In addition to the collaboration with OpenAI, the comprehensive showcase of the Helios rack platform at the Open Compute Project (OCP) exhibition marked a significant milestone — customers were able to directly observe the engineering design and functional advantages of Helios.

If you ask whether more customers have expressed interest in cooperation or if the pace of collaboration has accelerated over the past month, the answer is yes. Currently, there is a generally high willingness among customers to collaborate, and the scale of cooperation is also larger, which is a positive signal.

Regarding customer concentration risk: One of the core strategic foundations of our data center AI business is to establish a broad customer base. We have consistently maintained partnerships with multiple clients. In terms of supply chain planning, we ensure sufficient production capacity to support deployments of similar scale by multiple clients between 2027 and 2028 — this is undoubtedly our goal.

Operator: The next question comes from Aaron Rakers of Wells Fargo. Please go ahead with your question.

Aaron Rakers: Thank you for taking my question. I would like to understand, given the strong performance of the server business, how to analyze the contribution ratio of volume growth versus average selling price (ASP) increase to revenue within the Turin product cycle? What are your expectations for changes in this ratio going forward?

Lisa Su: Alright, Aaron. In the server CPU business, the Turin processor offers richer functionality, so as production ramps up, the average selling price (ASP) has increased. However, as I mentioned in my prepared remarks, demand for the previous-generation Genoa processor remains robust — hyperscale cloud providers cannot immediately switch all deployments to the latest generation, so Genoa continues to maintain strong sales momentum.

From our perspective, current CPU demand is broad-based, covering various workload scenarios. This is partly due to the server refresh cycle, but based on our discussions with customers, the more significant driver is that AI workloads are generating more traditional computing demands, necessitating expanded deployment scales.

Looking ahead, we observe stronger customer interest in the latest generation products. Therefore, we are pleased with the progress in ramping up production of the Turin processor; simultaneously, we see very strong demand for the Venice processor, with early-stage collaboration projects already underway — reflecting the importance of general-purpose computing at present.

Aaron Rakers: Thank you for the clarification. My follow-up question is: Without wanting to preempt too much of the discussion planned for next week’s Financial Analyst Day, Dr. Lisa Su, you have consistently emphasized that the Total Addressable Market (TAM) for AI chips could reach $500 billion, and this figure continues to grow. Given the emergence of numerous large gigawatt-scale deployment projects, what is your latest view on the TAM for AI chips in the coming years?

Lisa Su: Aaron, as you said, we do not want to reveal too much of next week’s discussion in advance. However, it is certain that we will provide a comprehensive overview of our market outlook next week. Based on current observations, the TAM for AI computing is continuously expanding. Next week, we will disclose updated specific figures, but it is clear that while the initial $500 billion TAM already sounded substantial, we believe the market opportunity in the coming years will be even greater — this is undoubtedly an exciting trend.

Operator: The next question comes from Antoine Chkaiban of New Street Research. Please go ahead with your question.

Antoine Chkaiban: Thank you for taking my question. I would like to ask whether the deepening partnership with OpenAI will drive further customization in your software stack? Could you share some insights into how this collaboration operates in practice and whether it contributes to enhancing the stability of ROCm?

Lisa Su: Sure, Antoine, thank you for your question. The answer is yes — every major customer collaboration is driving the expansion and deepening of our software stack. This is particularly true for our partnership with OpenAI, where we plan to engage in deep collaboration across hardware, software, systems, and future roadmaps. From this perspective, our cooperation with OpenAI on Triton (OpenAI’s open-source GPU kernel programming language and compiler) undoubtedly holds significant value.

However, I want to emphasize that beyond OpenAI, our collaborations with all major customers have played a crucial role in refining our software stack. We have allocated substantial new resources not only to serve large customers but also to work with numerous AI-native enterprises — these companies are actively developing based on the ROCm stack, providing us with extensive feedback.

Currently, we have made significant progress in both training and inference software stacks, and we will continue to increase investment moving forward. Therefore, the more customers adopt AMD products, the more it will drive improvements in the ROCm stack. We will have more discussions on this topic next week, and we are also leveraging AI technology to accelerate the development of ROCm kernels and the construction of the entire ecosystem.

Antoine Chkaiban: Thank you for the explanation, Dr. Lisa Su. My follow-up question is: could you elaborate on the lifespan of GPUs? I understand that most cloud service providers (CSPs) depreciate GPUs over a period of 5 to 6 years, but in your communications with them, have you observed or heard any early indications that they are planning to extend the usage period of GPUs?

Lisa Su: Antoine, we have indeed observed some early signs. The key point is that, on one hand, when building new data center infrastructure, customers clearly prefer to adopt the latest and most advanced GPUs — for instance, MI355 is typically deployed in newly constructed liquid-cooled facilities, and the MI450 series will follow suit in the future; on the other hand, the demand for AI computing power remains extremely strong, so the deployment and use of previous-generation GPUs (such as MI300X) in inference scenarios remain active.

Therefore, the market currently exhibits a coexistence of two trends.

Operator: The next question comes from Joe Moore of Morgan Stanley. Please proceed with your question.

Joe Moore: Thank you very much. You mentioned the MI308 GPU. I would like to understand your company's strategic positioning for this product: if export restrictions are relaxed in the future to allow shipments, is your company prepared? Could you elaborate on the potential revenue impact of this product?

Lisa Su: Alright, Joe. The situation with MI308 remains quite fluid, which is why we did not include its revenue in our Q4 guidance. We have obtained some export licenses for MI308, and we appreciate the government’s support in granting those permits.

Currently, we are in communication with customers to understand the demand environment and potential market opportunities. In the coming months, we will provide more updates.

Joe Moore: Understood. However, if the market opens up, does your company already have products ready for supply, or will you need to rebuild inventory?

Lisa Su: We currently have some work in process and will continue to advance related efforts, but the specific supply situation still depends on the demand environment.

Joe Moore: Thank you very much.

Lisa Su: Thank you for the question.

Host: Operator, we might be able to take one last question.

Operator: No problem. The final question comes from Ross Seymore of Deutsche Bank. Please go ahead with your question.

Ross Seymore: Thank you for allowing me to ask a question. Dr. Lisa Su, I know we are coming up on the hour, so I will be brief: OpenAI has announced several gigawatt-scale collaborations. How does AMD plan to achieve true differentiation in this space? When you see this large client signing contracts with other GPU vendors and ASIC suppliers, what unique market strategies has your company adopted to ensure that it not only secures the initial six-gigawatt deployment but also gains additional market share in the future?

Lisa Su: Alright, Ross. Currently, the global demand for AI computing power is extremely strong, which serves as a core backdrop. OpenAI is at the forefront of pursuing more AI computing power, but they are not alone—looking ahead, all major clients will significantly increase their demand for AI computing power over the next few years.

In terms of product positioning, each vendor has its own strengths. We believe that the MI450 series, particularly when combined with rack-scale solutions, is highly competitive: in both compute and memory performance, this product leads in inference as well as training scenarios.

Key success factors include: time to market, total cost of ownership (TCO), deep partnerships, and planning for future product roadmaps—not limited to the MI450 series but also subsequent products. At present, we have thoroughly discussed plans for the MI500 and beyond.

We are confident that we will not only participate in this market but also capture a significant share amid strong demand. Over the past few years, we have accumulated extensive experience with our AI product roadmap and gained a profound understanding of the workload requirements of large customers—therefore, I am optimistic about our future market performance.

Ross Seymore: Very good. My follow-up question is: In your collaboration with OpenAI, your company adopted a unique structure, including granting partial warrants, and reportedly, the pricing mechanism was quite innovative, enabling a win-win outcome for all parties. Do you consider this a relatively unique agreement? Or given the global urgency for computing power, would AMD be open to using similar equity-based tools or other creative methods to collaborate with other clients to meet market demands?

Lisa Su: Alright, Ross. Given the unique phase the AI industry is currently experiencing, the cooperation agreement with OpenAI is indeed distinctive. Our primary objective in this partnership was to establish a deep, long-term collaboration that supports multi-generational, large-scale deployments—and we have clearly achieved that goal.

The structure of this partnership ensures a strong alignment of interests, creating a win-win model for all parties involved: AMD, OpenAI, and our shareholders can all benefit, and these gains will ultimately feed back into advancing our product roadmap.

Looking ahead, we are developing interesting collaborations with numerous clients, including large AI users and sovereign AI projects. We view every partnership as a unique opportunity to fully leverage AMD’s technical and other capabilities to create value for our partners.

Therefore, while the partnership with OpenAI is indeed unique, I believe there will be more opportunities in the future for us to integrate our capabilities into the ecosystem and play an important role in market development.

Operator: Ladies and gentlemen, this concludes the Q&A session and brings our conference call to an end. Thank you for your participation. You may now disconnect.

The translation is provided by third-party software.

