
Nvidia H100: The AI “Chip” Behind the Soaring Stock Price

Golden10 Data ·  Apr 22 21:36


Nvidia's technology has few rivals when it comes to training artificial intelligence, and the H100 will be succeeded first by the H200 and later by the B100.

Computer components aren't usually expected to transform entire businesses or industries, but a GPU released by Nvidia (NVDA.O) in 2023 did just that. The H100 data center chip added more than $1 trillion to Nvidia's market value and made the company, almost overnight, the king of artificial intelligence. It showed investors that the buzz around generative artificial intelligence is turning into real revenue, at least for Nvidia and its most important suppliers. The H100 is in such high demand that some customers have to wait up to six months to receive one.

1. What is Nvidia's H100 chip?

The H100 is a graphics processor named in honor of Grace Hopper, a pioneer of computer science. It is a beefed-up version of the GPU commonly found in personal computers, where it helps gamers get the most realistic visual experience. Optimized to churn through large volumes of data and computation at high speed, it is ideally suited to the power-hungry task of training artificial intelligence models. Nvidia, founded in 1993, was a pioneer in this market; its investment dates back almost 20 years, to a bet that the ability to compute in parallel would one day make its chips important in applications beyond gaming.

2. What makes the H100 so special?

By training on large amounts of existing data, generative artificial intelligence platforms learn to complete tasks such as translating text, summarizing reports, and synthesizing images. The more data they see, the better they become at recognizing human language or writing a cover letter. They improve through constant trial and error, consuming enormous computational power in the process. Nvidia says the H100 trains these so-called large language models (LLMs) four times faster than its predecessor, the A100, and responds to user prompts 30 times faster. That performance edge is critical for companies racing to train LLMs to perform new tasks.
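The "trial and error" loop described above can be sketched in a few lines. This is a deliberately tiny illustration, not Nvidia's or any LLM's actual training code: it fits a single made-up weight by repeatedly measuring its error and nudging it downhill, the same basic gradient-descent loop that LLM training applies to billions of parameters at once.

```python
# Minimal sketch of learning by trial and error: gradient descent
# on one weight. Real LLM training runs this same loop over billions
# of parameters, which is why it demands so much compute.

def train(samples, lr=0.1, steps=100):
    w = 0.0  # initial guess
    for _ in range(steps):
        # average gradient of the squared error over all (x, target) pairs
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad  # nudge the weight to reduce the error
    return w

# Learn the rule y = 2x from a handful of examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(data), 3))  # converges toward 2.0
```

Scaling this loop from one weight to billions, over trillions of tokens of training data, is what creates the demand for chips like the H100.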

3. How did Nvidia become a leader in artificial intelligence?

The Santa Clara, California company is the global leader in graphics chips, the part of a computer that generates the image you see on screen. The most powerful of these chips contain hundreds of processing cores that perform many calculations simultaneously, simulating complex physics such as shadows and reflections. Nvidia's engineers realized in the early 2000s that these graphics accelerators could be repurposed for other applications by dividing tasks into smaller pieces and processing them in parallel. Just over a decade ago, artificial intelligence researchers discovered that this kind of chip could finally make their work practical.
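The splitting idea those engineers exploited can be sketched as follows. This is only a CPU-thread simulation of the concept, using a made-up task (summing squares) for illustration; a real GPU runs thousands of such chunks at once on dedicated hardware cores.

```python
# Illustration of the GPU idea: split one big task into independent
# chunks, compute the chunks concurrently, then combine the results.
# (Python threads here merely demonstrate the split; they don't gain
# real speedup for CPU-bound work the way GPU cores do.)
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(lo, hi):
    # each worker handles its own slice of the problem
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    step = n // workers
    # last chunk absorbs any remainder so every i in [0, n) is covered
    bounds = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: chunk_sum(*b), bounds))

# Same result as the serial sum(i * i for i in range(1000)).
print(parallel_sum_of_squares(1000))
```

The key property is that the chunks are independent, so adding more cores adds throughput. Graphics workloads and neural-network math both have this shape, which is why the same silicon serves both.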

4. Does Nvidia have a real competitor?

Nvidia controls around 80% of the market for accelerators in the artificial intelligence data centers operated by Amazon, Alphabet, and Microsoft. Those companies' in-house efforts to build their own chips, along with rival products from AMD, are its main competition.

5. How is Nvidia ahead of the competition?

Nvidia updates its products, including the software that supports its hardware, at a speed no other company can yet match. The company has also designed a range of cluster systems that help customers buy H100s in bulk and deploy them quickly. Chips such as Intel's (INTC.O) Xeon processors can handle more complex data operations, but they have fewer cores and are much slower at churning through the mountains of information typically used to train artificial intelligence software. Nvidia's data center division grew revenue by 81% in the last quarter of 2023, to $22 billion.

6. How do AMD and Intel compare to Nvidia?

AMD, the second-largest maker of computer graphics chips, unveiled a version of its Instinct series in June aimed squarely at the market Nvidia's products dominate. AMD CEO Lisa Su told an audience at an event in San Francisco that the chip, called the MI300X, has more memory, which helps it handle generative artificial intelligence workloads. "We are still in the early stages of the AI life cycle," she said in December. Intel is bringing chips designed specifically for artificial intelligence workloads to market, but it concedes that demand for data center graphics chips is currently growing faster than demand for the processors that have long been its strength. Nvidia's edge is not limited to hardware performance. The company created CUDA, a programming platform that lets developers program its graphics chips to perform the fundamental work underpinning artificial intelligence programs.

7. What products does Nvidia plan to release next?

Later this year, the H100 will pass the torch to its successor, the H200, before Nvidia makes more substantial changes to the design with the B100 model. CEO Jensen Huang has been an ambassador for the technology, urging governments and private companies to buy in early or risk being left behind by those that embrace artificial intelligence. Nvidia also knows that once customers choose its technology for generative AI projects, selling them upgrades will be far easier than it will be for competitors trying to lure them away.



