
Broadcom Accelerates Its Attack on Nvidia

Semiconductor Industry Watch ·  Jun 16 17:15

Source: Semiconductor Industry Watch. At yesterday's Computex conference, Dr. Lisa Su presented AMD's latest roadmap. Afterwards, the foreign outlet morethanmoore published Lisa Su's post-conference interview, which we have translated and summarized as follows:

Q: How does AI help you personally in your work?

A: AI affects everyone's life. Personally, I am a loyal user of GPT and Copilot, and I am very interested in the AI we use internally at AMD. We often talk about customers' AI, but we prioritize AI ourselves as well, because it can make our company better: for example, by building better and faster chips. We hope to integrate AI into the development process, as well as into marketing, sales, human resources, and every other field. AI will be ubiquitous.

Q: NVIDIA has explicitly told investors that it plans to shorten its development cycle to once a year, and now AMD plans to do the same. How and why are you doing this?

A: This is what we see in the market. AI is our company's top priority. We are fully utilizing the development capabilities of the entire company and increasing investment. There are new changes every year, as the market needs updated products and more features. The product portfolio can address a wide range of workloads. Not every customer will use every product, but there will be something new every year, and it will be the most competitive. This requires investment, ensuring that hardware and software systems are part of it, and we are committed to making AI our biggest strategic opportunity.

Q: The number of TOPS in the PC world - Strix Point (Ryzen AI 300) - has increased significantly. TOPS cost money. How do you weigh TOPS against CPU/GPU resources?

A: Nothing is free! Especially in designs where power and cost are limited. What we see is that AI will be ubiquitous. Currently, Copilot+ PCs and Strix offer more than 50 TOPS and will start at the top of the stack, but AI will run through our entire product stack. At the high end, we will expand TOPS because we believe the more local TOPS, the stronger the AI PC's capabilities; putting them on the chip increases its value and helps offload part of the computing from the cloud.

Q: Last week, you said that AMD will produce 3nm chips using GAA. Samsung's foundry is the only one producing 3nm GAA. Will AMD choose Samsung's foundry for this?

A: Refer to last week's keynote address at imec. What we said is that AMD will always use the most advanced technology. We will use 3nm. We will use 2nm. We did not name the supplier of 3nm or GAA. Our cooperation with TSMC is currently very strong; we talked about the 3nm products we are currently developing.

Q: Regarding sustainability: AI means more power consumption. As a chip supplier, is it possible to optimize the power consumption of devices that use AI?

A: In everything we do, especially in AI, energy efficiency is as important as performance. We are studying how to improve energy efficiency in every future product generation. We have said that we will improve energy efficiency 30-fold between 2020 and 2025, and we expect to exceed that goal. Our current goal is a 100-fold increase in energy efficiency over the next 4-5 years. So yes, we can focus on energy efficiency, and we must, because it will become a limiting factor for future computing.

Q: We had CPUs, then GPUs, and now we have NPUs. First, how do you see the scalability of NPUs? Second, what is the next big chip - a neuromorphic chip?

A: You need the right engine for each workload. CPUs are well suited to traditional workloads. GPUs are great for gaming and graphics tasks. NPUs provide AI-specific acceleration. As we move forward and research new, specialized acceleration technologies, we will see some of them evolve, but ultimately it is driven by applications.

Q: You initially broke Intel's status quo by increasing the core count. But the core count of your recent (consumer) products has plateaued. Is this enough for consumers and the gaming market, or should we expect core counts to rise again in the future?

A: I think our strategy is to continuously improve performance. Especially in games, software developers do not always use all the cores. There is no reason we could not go beyond 16 cores; the key is pacing our development so that software developers can actually utilize those cores.

Q: Regarding desktops, do you think more efficient NPU accelerators are needed?

A: We see that NPUs have an impact on desktops. We have been evaluating which product segments can use this capability. You will see desktop products with NPUs in the future as we expand our product portfolio.
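The efficiency goals above imply a steep compounding rate. A minimal sketch of that arithmetic (illustrative only; the 30x and 100x figures are from the interview, the per-year breakdown is our own calculation):

```python
# Implied annual improvement rate for a stated multi-year efficiency goal.
# The 30x (2020-2025) and 100x (next 4-5 years) targets come from the
# interview above; treating both as 5-year spans is our assumption.

def implied_annual_rate(total_gain: float, years: float) -> float:
    """Annual multiplier needed to compound to `total_gain` over `years`."""
    return total_gain ** (1 / years)

rate_30x = implied_annual_rate(30, 5)    # ~1.97x per year
rate_100x = implied_annual_rate(100, 5)  # ~2.51x per year

print(f"30x over 5 years  -> {rate_30x:.2f}x per year")
print(f"100x over 5 years -> {rate_100x:.2f}x per year")
```

In other words, the new 100x goal asks for efficiency to more than double every year, noticeably faster than the pace behind the original 30x target.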

$NVIDIA (NVDA.US)$ Many competitors are vying for its market dominance, and one name that keeps popping up is $Broadcom (AVGO.US)$. Look closely and you'll see why.

Bank of America said in a report to investors this week that it sees one company as its top pick for AI. It's not Nvidia, although Bank of America still regards the Green Team as the undisputed winner of the GPU wars. It's Broadcom, which recently announced a 10-for-1 stock split and better-than-expected results in its second-quarter earnings report. The company expects fiscal 2024 sales to exceed $51 billion.

Bank of America analysts predict that the company's fiscal 2025 sales will reach $59.9 billion, roughly 16% year-over-year growth. The analysts point to efficiency gains from last year's VMware acquisition, revenue growth, and potential growth in custom chips as the key drivers in their 2025 forecast for Broadcom. If Bank of America is right, Broadcom's market cap could put it in the trillion-dollar club alongside tech giants such as Microsoft, Apple, Nvidia, Amazon, Alphabet, and Meta.

To get there, it must compete with Nvidia, whose current market cap of roughly $3.4 trillion far surpasses Broadcom's $804 billion. In addition, Nvidia's CUDA architecture has achieved near-monopoly status in the AI workloads of hyperscalers such as Meta, Microsoft, Google, and Amazon, which are its largest customers. Its vast ecosystem of software, tools, and libraries further locks in customers and sets a high entry barrier for competitors like Broadcom.

These companies all hope to reduce their dependence on Nvidia, so Broadcom positions itself as an alternative, providing customized AI accelerator chips (called XPUs) for cloud computing and AI companies. At a recent event, Broadcom noted that demand for its products is snowballing: two years ago, the most advanced cluster had 4,096 XPUs. In 2023, it built a cluster with more than 10,000 XPU nodes, requiring two layers of Tomahawk or Jericho switches. The company's roadmap is to scale this to more than 30,000 and ultimately 1 million XPUs.
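Why does a 10,000-node cluster need two layers of switches? A rough, non-blocking leaf-spine sizing sketch shows the scaling limit of a single switch radix. The radix values below are illustrative assumptions, not specs from the article:

```python
# Capacity of a non-blocking two-tier (leaf-spine) switch fabric.
# Each leaf splits its ports evenly: radix/2 face hosts, radix/2 face
# spines. Each spine port serves one leaf, so up to `radix` leaves fit,
# giving radix * radix / 2 hosts in total. Radix values are hypothetical.

def max_hosts_two_tier(radix: int) -> int:
    """Max hosts a non-blocking two-tier fabric supports: radix^2 / 2."""
    return radix * radix // 2

for radix in (64, 128, 256):
    print(f"radix {radix:3d} -> up to {max_hosts_two_tier(radix)} hosts")
# radix  64 -> up to 2048 hosts
# radix 128 -> up to 8192 hosts
# radix 256 -> up to 32768 hosts
```

Under these assumptions, a 10,000-XPU cluster is out of reach for a single high-radix switch but fits comfortably in a two-tier fabric of very-high-radix switches, which is consistent with the two-layer Tomahawk/Jericho design the article describes.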

One advantage Broadcom emphasizes is the energy efficiency of its XPUs: at under 600 watts each, they are among the industry's lowest-power AI accelerators.

Broadcom also takes a different view of the chip market, arguing that it is shifting from being CPU-centric to connectivity-centric. Beyond CPUs, the emergence of alternative processors such as GPUs, NPUs, and LPUs requires high-speed interconnects, which is exactly where Broadcom excels.

Editor/tolk


