

Dozens of AI companies including OpenAI join forces to “encircle” Nvidia

TMTPost News ·  Feb 4 17:05

Source: Titanium Media

According to Wells Fargo statistics, Nvidia currently has 98% market share in the data center AI market.

The new wave of artificial intelligence (AI) enthusiasm has triggered a huge increase in demand for large-model computing power, leaving $NVIDIA (NVDA.US)$ AI chips in short supply and driving computing costs ever higher. AI giants may therefore choose to “encircle” Nvidia.

On February 2, Beijing time, tech giant Meta Platforms confirmed that it plans to deploy its latest self-developed custom chip in its data centers this year, working alongside other GPU chips to support the development of its large AI models.

Dylan Patel, founder of the research firm SemiAnalysis, said that given the scale of Meta's operations, once its self-developed chips are deployed at scale, the company could save hundreds of millions of dollars in energy costs and billions of dollars in chip procurement costs every year.

Earlier, OpenAI continued to advance its chip-making program. CEO Sam Altman recently visited South Korea to discuss cooperation with chip giants Samsung and SK Hynix. According to reports, one of the main topics at the meetings was high-bandwidth memory (HBM) chips. Some sources say that if a deal is reached, Samsung and SK Hynix may custom-develop and produce memory chips for OpenAI.

It's not just Meta and OpenAI. According to The Information, there are now more than 18 chip-design startups worldwide targeting large-model training and inference, including Cerebras, Graphcore, Wall Thread, and D-matrix. Their total financing has exceeded 6 billion US dollars, and their combined valuation exceeds 25 billion US dollars (about 179.3 billion yuan).

Investors behind these companies include Sequoia Capital, OpenAI, 5Y Capital (Wuyuan Capital), ByteDance, and others. Counting the chip-making efforts of tech giants and chip leaders such as Microsoft, Intel, and AMD, the number of AI chip companies taking aim at Nvidia will ultimately exceed 20.

Today, even if they “can't kill” Nvidia, the GPU (graphics processor) leader with a market value of 1.63 trillion US dollars, dozens of AI companies around the world are doing their best to “encircle” it.

Altman is not only accelerating the construction of AGI, but also raising 10 billion dollars to make chips

Currently, in the field of large models, AI computing power is constrained on two fronts: demand for AI model training has skyrocketed, and the cost of computing power keeps climbing.

The first is growing demand.

According to the OpenAI developer platform's status page, on January 29 the application programming interface (API) of the ChatGPT platform suffered an outage lasting nearly 20 minutes, causing the error rate of ChatGPT's responses to soar. OpenAI said this was mainly due to a surge in traffic, which led to server “downtime.”

According to some sources, the ChatGPT platform currently runs mainly on Nvidia A100 and H100 GPUs, and a single training run takes about 25,000 A100 chips. Training GPT-5 would require some 50,000 H100s. Market analysts believe that as the GPT models continue to iterate and upgrade, GPT-5 may find no chips available.

In addition to demand, the cost of computing power is also one of the core elements of AI chip procurement.

According to public data, the price of the Nvidia H100 has soared to $25,000-30,000, which puts the cost of a single ChatGPT query at around $0.04. If ChatGPT's capabilities were to rival Google Search, OpenAI would need an initial investment of about 48.1 billion US dollars in AI chips, plus about 16 billion US dollars worth of chips every year to keep the service running.
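The figures above imply a simple amortization relationship. A back-of-envelope sketch (only the $25,000-30,000 price range and the ~$0.04 per-query estimate come from the cited data; the midpoint choice and the division are my own illustration):

```python
# Back-of-envelope: how many $0.04 queries it takes to equal the
# purchase price of one H100. The midpoint price is an assumption.

H100_PRICE = 27_500   # midpoint of the quoted $25,000-30,000 range, USD
QUERY_COST = 0.04     # estimated cost of a single ChatGPT query, USD

def queries_to_amortize_one_gpu(chip_price: float = H100_PRICE,
                                per_query: float = QUERY_COST) -> float:
    """Number of queries whose total cost equals one H100's price."""
    return chip_price / per_query

print(f"{queries_to_amortize_one_gpu():,.0f} queries ≈ one H100's price")
```

In other words, a single H100's sticker price corresponds to the cost of several hundred thousand queries, before counting energy, networking, and depreciation.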

Today, Nvidia has become an indispensable partner in AI model training.

According to Wells Fargo statistics, Nvidia currently has 98% market share in the data center AI market, while AMD's market share is only 1.2%, and Intel's market share is less than 1%.

Nvidia CEO Jensen Huang (Photo: SCMP)

According to Counterpoint Research, Nvidia's revenue soared to US$30.3 billion in 2023, up 86% from US$16.3 billion in 2022, making it the world's third-largest semiconductor manufacturer in 2023. Wells Fargo, for its part, expects Nvidia to generate up to 45.7 billion US dollars of data-center revenue in 2024, a record high.

Wells Fargo's report noted that although AMD's AI chip revenue in 2023 was only 461 million US dollars, it is expected to grow to 2.1 billion US dollars in 2024, lifting AMD's market share to 4.2%. Intel may also approach 2% of the market, which would bring Nvidia's share down to 94%.
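The AMD projections above also let us back out the total market size Wells Fargo is assuming for 2024 (the division is my own calculation, not a figure stated in the report):

```python
# Implied size of the 2024 data-center AI chip market, back-calculated
# from Wells Fargo's AMD projections. Illustration only.

AMD_2024_REVENUE = 2.1e9   # projected AMD AI chip revenue, USD
AMD_2024_SHARE = 0.042     # projected AMD market share (4.2%)

implied_market = AMD_2024_REVENUE / AMD_2024_SHARE
print(f"Implied 2024 market size: ${implied_market / 1e9:.0f}B")
```

This works out to roughly 50 billion US dollars, broadly consistent with the projected 45.7 billion US dollars of Nvidia data-center revenue at a roughly 94% share.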

For OpenAI, as a large-model maker, cost and meeting demand are the two core considerations, and both are choke points controlled by Nvidia. It is therefore not hard to understand why Altman insists on making his own chips: a safer supply and more controllable long-term costs, reducing dependence on Nvidia.

Altman has “complained” about the shortage of AI chips many times, saying that Nvidia's current chip production capacity is insufficient to meet future needs.

At a WSJ Live event last October, Altman responded to the rumors for the first time, saying that the option of developing its own chips was not ruled out.

“We are still evaluating whether to use custom hardware (chips). We're working to determine how we can scale up to meet the world's needs. We may not develop chips ourselves, but we are maintaining good cooperation with partners that are achieving outstanding results,” Altman said.

So how is OpenAI making chips? As things stand, Altman holds three main cards: finding investors to build fabs, partnering with leading chipmakers, and investing in chip startups.

On January 20 of this year, Bloomberg reported that Altman is raising more than 8 billion US dollars from global investors such as Abu Dhabi's G42 and Japan's SoftBank Group to establish a new AI chip venture, with the goal of building a network of fabs to manufacture chips and take on Nvidia directly. However, negotiations are still at an early stage, and the full list of investors remains uncertain.

In addition to the financing push, Altman is also stepping up cooperation with leading chip manufacturers. On January 25, Altman met with the CEO of SK Hynix, the leading Korean memory chip company, and the chairman of SK Group, with talks centered on forming an “AI chip alliance.” OpenAI may cooperate with Samsung and SK Group on AI chip design and manufacturing.

Earlier, the Financial Times reported that before approaching SK and Samsung, Altman had already contacted Intel and discussed with TSMC the possibility of building a new chip factory.

Intel CEO Pat Gelsinger announced a few days ago that Altman will attend IFS Direct Connect, the first event for Intel's foundry business, on February 21. Altman will speak as a guest, and the two sides may announce a partnership.

In fact, besides building factories, Altman has also invested in at least three chip companies. One of them is Cerebras, a well-known US computing-chip company.

According to reports, Cerebras has launched record-breaking wafer-scale chips: its second-generation AI chip, the WSE-2, packs 2.6 trillion transistors and 850,000 AI cores.

The second company Altman invested in is Rain Neuromorphics, a chip startup building brain-inspired processors on the open-source RISC-V architecture. According to reports, its chips can both train algorithms and run them once deployed. Although the first hardware has yet to be delivered to customers, OpenAI has already ordered $51 million worth of chips from the company.

The last is Atomic Semi, co-founded by chip veteran Jim Keller and Sam Zeloof. Keller was the chief architect of AMD's K8 and also took part in developing Apple's A4/A5 chips. Atomic Semi aims to simplify the chip-manufacturing process and achieve rapid production in order to cut chip costs. In January of last year, Atomic Semi announced the completion of a funding round at a valuation of around 100 million US dollars.

As big models continue to explode, the demand for AI computing power will continue to rise. Altman has revealed that OpenAI wants to guarantee an adequate supply of chips by 2030.

“Besieging” Nvidia: can the challengers, old and new, stand out?

In addition to OpenAI, the team “besieging” Nvidia this time mainly includes three categories:

First, chip giants such as Intel and AMD, which have long been both friends and rivals of Nvidia, know GPU technology well, and now want to challenge Nvidia head-on;

Second, Internet and cloud-service providers such as Meta, Amazon, and Microsoft, which are deeply familiar with cloud-based large-model technology and hope to use self-developed chips to cut costs and reduce their dependence on Nvidia;

Third, chip-design startups founded in the past five years, such as Tenstorrent, Cerebras, Graphcore, Etched, Extropic, and MATX, which hope to ride the new AI wave to drive rapid sales growth.

However, The Information is not optimistic that so many AI chip companies will survive.

The report points out that several AI and autonomous-driving chip startups that emerged in the previous wave around 2015 eventually went out of business on a large scale as big companies entered the market. “This highlights a pattern in the tech industry: talent and capital flow to wherever is hot.”

Thus, how companies build moats in AI chip technology, products, and commercialization will be a decisive factor in this battle to “encircle” Nvidia.

Jensen Huang recently stated publicly that Huawei, Intel, and US government export controls pose a serious challenge to Nvidia's dominance of the AI chip market.

On February 1, news broke that the HGX H20, one of three China-market “special edition” chips Nvidia developed in response to the latest US export-control rules, will open for pre-orders and ship in small batches in the first quarter of 2024, with volume delivery expected to begin in the second quarter. Reportedly, the H20's performance is less than 20% of the H100's; it is priced at 12,000 to 15,000 US dollars per card, with a final retail price of about 110,000 yuan (about 15,320 US dollars), roughly on par with Huawei's Ascend 910B.

Also, according to public dealer information, an Nvidia H20 server with eight AI chips sells for 1.4 million yuan. By contrast, a server with eight H800 chips sold for around 2 million yuan last year.
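The reported server prices make the pricing gap easy to quantify (the server prices and card counts come from the figures above; the per-card division and the percentage are my own arithmetic):

```python
# Comparing the reported China-market server prices for H20 vs. H800.

H20_SERVER_CNY = 1_400_000    # 8x H20 server, reported dealer price, yuan
H800_SERVER_CNY = 2_000_000   # 8x H800 server, last year's price, yuan
CARDS_PER_SERVER = 8

h20_per_card = H20_SERVER_CNY / CARDS_PER_SERVER     # all-in yuan per card
h800_per_card = H800_SERVER_CNY / CARDS_PER_SERVER   # all-in yuan per card
discount = 1 - H20_SERVER_CNY / H800_SERVER_CNY

print(f"H20 server is {discount:.0%} cheaper than last year's H800 server")
```

That is, at the server level the export-compliant H20 is roughly 30% cheaper than last year's H800, despite its much lower performance.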

Companies such as Alibaba, Tencent, and Xinhua began testing Nvidia's new China-market AI chips as early as November 2023. However, according to multiple sources, these companies may order far fewer chips from Nvidia this year than previously planned.

Nvidia is finding it increasingly difficult to sell high-end graphics cards in China, while more latecomers, such as a chip-making OpenAI, are catching up. As competition in the AI chip market intensifies, who will emerge from this AI wave as the winner among the old and new challengers, and how this important market will evolve, are both worth watching.

“Many people think computing power will get cheap quickly, and that business models will be built on falling compute costs, but our judgment is that the cost of computing power will not fall; it will remain high for a long time,” Li Di, CEO of Xiaoice, said recently.

Editor: tolk


