
Jensen Huang's latest conversation: Humanoid robots will become mainstream, priced about the same as cheap cars

Tencent Technology ·  Apr 22 20:13

Source: Tencent Technology

Key takeaways

① Jensen Huang believes that life science, robotics, and autonomous driving will become three major industries of the future.
② Humanoid robots will become mainstream, priced between $10,000 and $20,000, which is close to the current price of a cheap car.
③ Eroom's Law, the reverse of Moore's Law, is real: if the computer industry does not move to accelerated computing and artificial intelligence, it will face a reverse Moore's Law of rising computing costs.
④ Viewed vertically across entire industries, artificial intelligence will revolutionize the way we respond to climate change, helping us use less energy and improve energy efficiency.

According to foreign media reports, Nvidia CEO Jensen Huang recently attended the CadenceLive Silicon Valley 2024 conference, where he joined a conversation with Anirudh Devgan, CEO and President of Cadence, the conference organizer. In the conversation, Huang talked about the key role of artificial intelligence and accelerated computing in shaping major industry trends, and how the two companies are cooperating to drive transformation in EDA (electronic design automation), SDA (system design automation), digital biology, and artificial intelligence.

Cadence is a software company specializing in electronic design automation. Formed in 1988 through the merger of SDA Systems and ECAD, it is the world's largest provider of electronic design automation tools, semiconductor technology solutions, and design services. Its customers include many of the world's most innovative companies in hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer, industrial, and healthcare markets. Cadence provides products spanning chips and circuit boards to complete systems, and chip makers such as Nvidia and AMD are major partners of Cadence.

Here's a summary of the conversation:

Devgan: Hello everyone. I am honored to introduce Jensen Huang. Of course, he needs no further introduction. Our company has a long-standing partnership with Nvidia, and it's amazing how Jensen and Nvidia are changing the world. In today's conversation, Jensen will share his experience of leading a company, transforming an industry, and developing new markets over the past 30 years. Please welcome Jensen Huang.

Jensen Huang: I am very happy to take part in this conversation. I love designers, I love design tools, and I love Cadence. I also like Cadence's Allegro (a PCB design and routing tool) and Palladium (an integrated-circuit design verification platform). For me, Palladium is the only appliance more important than my refrigerator. Palladium is the single most important tool in my life, and Nvidia is also its biggest customer. Nvidia installed and operated the first supercomputers built with Palladium. We are incredibly obsessed with Palladium, so we love what you do, and Nvidia could not have done its work without it. Thank you very much.

Devgan: Thank you. We love our partnership with Nvidia. It's worth noting that you are now "the godfather"; everyone calls you the godfather.

Jensen Huang: When the godfather says anything, it happens; when the godfather wants anything, he gets it. Obviously, I'm not at that level.

Devgan: As far as artificial intelligence is concerned, we all know Nvidia is at the cutting edge of innovation in this industry. What do you think large models will look like in the next five years? How will data center architecture change? What are your thoughts on what's next? It has been a long journey, but how do you think artificial intelligence will evolve over the next five years?

Jensen Huang: Let's take a step back; in fact, your keynote just now was probably the most important part, because it highlighted the changes in the underlying computing technology. As you know, Cadence and computer technology have made each other better. What is happening is a fundamental shift in the underlying computing platform, which is the foundation of Cadence and of every industry that relies on Cadence.

In your keynote address just now, you very clearly emphasized the many benefits that accelerated computing brings to the digital twin platform Millennium. Once accelerated computing is adopted, generative artificial intelligence may become a reality. Generative AI will be difficult to achieve without a transition to accelerated computing.

The advantage of switching to accelerated computing is that where CPU scaling used to be very difficult, accelerated computing gives you a factor of 1,000x, plus another factor of roughly 30x on top of that, and when generative artificial intelligence is added, there is a further factor on the order of 100,000x. Some of the things you mentioned at the beginning were really great. You said the design tool used to run through the process just once, but what the designer wants is to explore the multi-dimensional design space many times over. There is no single right answer to this problem, only the best answer. We need to explore thousands of different regions, and of course exhaustively covering such a vast design space is infeasible; even unlimited computation could not do it by brute force. So we need artificial intelligence to guide us into specific regions to explore and optimize, and then apply principled solvers to that narrowed-down work. Together, we can do all kinds of different things.
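The "AI narrows the search, a principled solver finishes the job" idea can be sketched in a few lines of Python. Everything below, including the toy objective and the surrogate, is a hypothetical illustration of the general pattern, not Cadence's or Nvidia's actual flow:

```python
# Toy sketch: explore a huge design space by letting a cheap learned surrogate
# shortlist promising candidates, then spend the expensive "principled solver"
# budget only on that shortlist. All names and numbers are illustrative.
import random

def expensive_solver(x: float) -> float:
    """Stand-in for an accurate but costly simulation of one design point."""
    return -(x - 0.7) ** 2  # higher is better; optimum at x = 0.7

def cheap_surrogate(x: float) -> float:
    """Stand-in for a fast learned model that approximates the solver."""
    return -(x - 0.65) ** 2 + random.uniform(-0.01, 0.01)  # approximate and noisy

candidates = [random.random() for _ in range(100_000)]        # a vast design space
shortlist = sorted(candidates, key=cheap_surrogate, reverse=True)[:20]
best = max(shortlist, key=expensive_solver)                   # 20 solver calls instead of 100,000

print(f"best design ~{best:.3f}, score {expensive_solver(best):.5f}")
```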

As far as accelerated computing is concerned, I think generative artificial intelligence will first change the way Cadence develops software, and then change the way we use that software. Beyond doing the job well, I think there are a few other benefits. We use Cadence to design our circuits, chips, PCBs, systems, and now our data centers. We use Cadence products for circuit design, logic design, system design, simulation, verification, and formal verification, extending all the way to liquid-cooling systems.

The design space is no longer just a chip or a system; it is a co-design that runs through the whole thing. Millennium is a great example, because essentially Cadence is a co-design company, and Nvidia is a co-design company too. You have to innovate across the entire process, so you changed Cadence from a chip-design EDA company into an EDA and SDA company, which I think is very visionary and very necessary. In fact, this is exactly how we work with Cadence and how we design our systems.

People are starting to focus on a few areas. In fact, your keynote was absolutely amazing; I recommend everyone go through it a few more times because there is so much in it. One of the really far-reaching points you made is that by investing in accelerated computing, artificial intelligence, data centers and so on, we can design better, more energy-efficient products. Keep in mind that you design a chip once, but it is used a trillion times over; you design a data center that saves 6% of its electricity, and a billion people can enjoy that saving all day long. So by designing better software, chips, and systems, the energy we save will be a permanent benefit to society. On one hand, artificial intelligence consumes more electricity and more data centers; on the other hand, through better product design, better computers, better cars, better phones, better materials and so on, we will reduce other electricity and energy consumption by 98%.
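To make the amortization argument concrete, here is a back-of-the-envelope sketch; apart from the 6% figure from the conversation, every number is an illustrative assumption:

```python
# Back-of-the-envelope: a one-time design improvement is amortized over every
# deployed unit and every hour of operation. Only the 6% saving comes from the
# conversation; the baseline power and deployment count are assumptions.

baseline_power_mw = 20          # assumed power draw of one data center, in MW
saving_fraction = 0.06          # 6% saving from better design (figure from the talk)
deployments = 100               # assumed number of data centers built from that one design
hours_per_year = 24 * 365

saved_mwh_per_year = baseline_power_mw * saving_fraction * deployments * hours_per_year
print(f"~{saved_mwh_per_year:,.0f} MWh saved per year across all deployments")
# The design work is done once, but the saving recurs for the life of every deployment.
```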

So I think we're really at an inflection point; these are all completely true. This is a really exciting time. Your keynote really highlighted that.

Devgan: Thank you so much. You've been talking about this transformation. Nvidia makes the best chips, but you've kept stressing that this transformation isn't just about making chips, and people might be confused about that. At Nvidia's GTC conference this year, your keynote covered building an entire data center, a very complete system; when you put it all together, you're talking about racks and liquid-cooled data centers. For a chip company, transitioning to an entire architecture isn't easy. Nvidia has transformed into a complete software and systems company, and I'm curious how you did it and what you think about it. This transformation is really difficult. Some systems companies are trying to make chips, which is very hard, but Nvidia has flawlessly completed the transition from chips to systems, software, and data. How did you do it?

Jensen Huang: I think the most important word you just used was "flawless." I've been a chip designer for a long time; I've done this job my whole career. When you say the word "flawless," I'm sure the audience noticed it. This is something we observed a long time ago, and it turned out to be true.

The first thing to note is that a small portion of the code in a program takes up the vast majority of the run time. Take CFD as an example: 3% of the code accounts for 99.9999% of the run time. If that's the case, why use exactly the same tools, the same instruments, and the same processor for all of the code? Why not handle the 97% one way and do something special for the remaining 3%? By doing this, you can speed up the application by a factor of 100,000.
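A rough way to see how such factors arise is Amdahl's-law arithmetic. The sketch below reuses the 3% / 99.9999% split from the conversation; the speedup applied to the hot path is an illustrative assumption:

```python
# Illustrative Amdahl's-law arithmetic for the CFD example above: a tiny fraction
# of the code accounts for almost all of the run time, so accelerating only that
# fraction yields a huge overall speedup.

def overall_speedup(hot_fraction: float, hot_speedup: float) -> float:
    """Overall speedup when only the 'hot' fraction of run time is accelerated."""
    return 1.0 / ((1.0 - hot_fraction) + hot_fraction / hot_speedup)

hot_fraction = 0.999999     # 3% of the code, 99.9999% of the run time (from the talk)
hot_speedup = 1_000_000     # assumed speedup of the accelerated hot path (illustrative)

print(f"{overall_speedup(hot_fraction, hot_speedup):,.0f}x overall")
# roughly 500,000x here; the ceiling as hot_speedup grows is 1/(1 - hot_fraction) = 1,000,000x
```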

Only a small number of applications are worth rewriting to gain this advantage. We cleverly chose computer graphics as the first application for accelerated computing because it requires a great deal of parallel computation and lends itself to parallel processing, and it is a very large, fast-growing market with a lot of innovation, so we chose a good market as the starting point. But we always imagined that, beyond computer graphics, many other applications would emerge.

Accelerated computing is not the same as general-purpose computing. In general-purpose computing, you can create one processor that runs all the code; that is definitely not the case with accelerated computing. You know I coined the term "accelerated computing," and what I mean is that it speeds up an application. It's an application-acceleration computing platform, and you have to know which applications are the right ones. In Nvidia's case, we started by choosing computer graphics, but we also did imaging, and then molecular dynamics. I'm excited to see your work on digital biology.

Imagine if the chip design industry were instead called the chip discovery industry, and my engineering team came in saying, "Look what we discovered in the Blackwell architecture this year!" and then next year was a dry spell in which nothing was discovered. We would never run things that way, and that framing doesn't work for biology either, because biology is far more complicated. Nvidia could not develop transistors without design tools, and we won't be able to engineer biology until there are design tools for it; you need design tools to keep up with the pace of biology. So I think one of the biggest industries in the world is going to be the industry Cadence serves, not the 1% you mentioned in your keynote. I think Cadence will have huge room for growth in the future.

Every industry, whether biology or the transportation industry you mentioned, has different applications. Some involve imaging, some particle physics, some fluids, and some things like finite-element meshes, so the algorithms are different. In fact, Cadence is a mathematics and computer technology company, and in many ways Nvidia is also a mathematics and computer technology company, which is why we get along so well. We are always working on acceleration in specific domains, and over 30 years we have accumulated domain-specific libraries for all these different fields on top of the CUDA architecture: some for particles, some for imaging, some for artificial intelligence, and so on.

Devgan: You gave a great presentation at this year's GTC conference. What I want to tell you is that next time you'll need a bigger venue; that huge arena didn't have enough space this year, so maybe next time you'll host GTC in Las Vegas. At this year's event, you highlighted so many applications, with horizontal support for almost every industry, and the impact on some industries may be huge, such as the life sciences you mentioned. You've also talked about robotics, autonomous driving, and so on. Are there one or two industries Nvidia is involved in that you are especially excited about in the short or medium term, with the greatest potential for impact?

Jensen Huang: The three industries you mentioned happen to be the ones I'm most excited about right now. One is data centers, or computing technology itself; the second is the autonomous technology you mentioned, which I would generalize to robots, autonomous machines, autonomous systems, and semi-autonomous systems as one overall category. Whether it's a car or a truck, a pizza-delivery robot, or an articulated humanoid robot, these systems have a lot in common. They need many sensors and, more importantly, functional safety. The way these computers are designed and verified, and the operating system itself, cannot be of the ordinary kind, and that is very important.

Artificial intelligence is used throughout, and these systems will be connected to the cloud and to data centers at all times, so they can update their experience, report faults and new situations, and then download new models. So I love the whole field of autonomous systems; it's a brand-new category. One device we all want to build in the near future is the humanoid robot, and its manufacturing cost is likely to be much lower than people expect. Humanoid robots will probably sell for somewhere between $10,000 and $20,000. Since cheap cars currently sell for $10,000 to $20,000, why shouldn't we be able to buy a humanoid robot in that price range?

In an environment designed for humans, humanoid robots may be more flexible and versatile. Production lines were designed for humans, warehouses were designed for humans, and a great many things were designed for humans, so in that environment humanoid robots are likely to be more productive. I love that, and I love turning biology into an engineering discipline. The scientific discovery process is critical, but it is sporadic, which is why Eroom's Law, the reverse of Moore's Law, holds. By the way, if we don't move to accelerated computing and we don't move to artificial intelligence, the computer industry will experience its own reverse Moore's Law. The reason is clear: the amount of work and computation we do keeps growing, but CPU scaling is slowing down, so our computing costs will go up rather than down.
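The reverse-Moore's-law point is, at bottom, compound-growth arithmetic: if the computation demanded grows faster than general-purpose performance per dollar, the cost of keeping up rises every year. The growth rates below are illustrative assumptions, not figures from the conversation:

```python
# If demand for computation compounds faster than CPU performance per dollar,
# the relative cost of keeping up rises every year. Growth rates are assumptions.

demand_growth = 1.5         # assumed: computation demanded grows 50% per year
cpu_perf_per_dollar = 1.1   # assumed: general-purpose perf per dollar grows 10% per year

cost = 1.0
for year in range(1, 6):
    cost *= demand_growth / cpu_perf_per_dollar
    print(f"year {year}: relative compute cost {cost:.2f}x")
# Cost climbs about 36% per year under these assumptions -- a reverse Moore's law.
```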

Given this, we must move to accelerated computing to save electricity, time, and money. In any case, I think digital biology will experience a complete revival. Science and engineering are getting closer, and this is a very complex field. We have to innovate, but for the first time we have the necessary tools, computational systems, to help us deal with very messy algorithms on large systems. The data-driven approach is merging with the fundamental simulation methods you mentioned earlier, and that fusion may give us an opportunity. I think these three industries are the right ones; their markets will all be huge, and the humanoid robot market alone is big enough.

Devgan: Either way, cars with autonomous driving capability will probably be the first robots, and humanoid robots will then become another huge market.

Jensen Huang: That's true.

Devgan: What is your view on the energy consumption of artificial intelligence? Data centers can certainly be optimized, but what else can we do?

Jensen Huang: First, the power consumption of accelerated computing is very high because these are very dense, highly integrated computers. Whatever optimization we can make to power utilization translates directly into higher performance, and that is measurable: higher efficiency generates more revenue, or translates directly into cost savings from buying a smaller system with the same performance.

Artificial intelligence can actually help people save energy. If you hadn't created your own AI models, the models now used in your tools, we wouldn't have found cost savings of over 6%; that wouldn't be possible without artificial intelligence. So you invest once in model training, and millions of engineers like us benefit, and over the next few decades billions of people will enjoy the savings. That is how to think about cost: not just on a case-by-case basis, but vertically, across the whole field. You have to consider the cost and energy savings and the impact on climate change vertically across the entire span, not just the products you're making, but the products you're designing. From that vertical perspective, artificial intelligence will revolutionize the way we respond to climate change, help us use less energy, improve energy efficiency, and more.

Devgan: You have a very unique management style. Many of today's attendees are engineers and managers; do you have any advice for them?

Jensen Huang: The core idea of my management system and leadership philosophy, turned into action, is this: you must be willing to create the conditions in which outstanding people can do their life's work. That is the management philosophy I believe in. So the question becomes what we can do to create the conditions for people to do their life's work.

I think one of the most important aspects is giving them information. I don't make decisions that only one person needs to hear, and I rarely tell individuals things privately on the theory that no one else deserves to hear them or could handle them. I tend to do most of my work in a large setting, where diverse teams of experts and contributors come together and we simply solve problems.

In addition to being transparent about the challenges the company faces and the information people deserve, I like to reason out loud in front of people and suggest directions based on sound reasoning. By forcing myself to reason, I accomplish two things: one is to influence others, the other is to teach others. I think fully empowering employees is one of the reasons Nvidia can stay so small. We have only 28,000 employees, but our reach is enormous, because almost everyone is empowered to make sound decisions on my behalf.

Finally, the shape of your organization should reflect the products you build. Nvidia is a full-stack technology company; we have full-stack talent, and everything is co-designed. Co-design means you shouldn't work only with the hardware team or only with the software team; you should work on everything at the same time, because you are designing it together. So I try to create an environment where experts and contributors from every level of the company can participate in solving a problem at the same time. Those are my management principles.

Editor: lambor


