Nvidia CEO Jensen Huang Praises Elon Musk For Achieving Something With XAI In 19 Days That Usually Takes At Least A Year: 'Singular In His Understanding Of Engineering'

Benzinga · 10/14 15:16

In an episode of the Bg2 Pod, Nvidia Corporation's (NASDAQ:NVDA) CEO Jensen Huang shared his thoughts on a variety of subjects, including Tesla and SpaceX CEO Elon Musk's xAI.

What Happened: The podcast, posted on Sunday, features a discussion between Altimeter Capital CEO Brad Gerstner, partner Clark Tang, and Huang.

During the conversation, the Nvidia CEO was asked about xAI's achievement of constructing a large, coherent supercluster in Memphis in a matter of months.

"Elon is singular in this understanding of engineering and construction and large systems and marshaling resources," he said.

The Nvidia CEO also praised the engineering, networking, and infrastructure teams at xAI, stating that the integration of technology and software was "incredible."

"Just to put in perspective, 100,000 GPUs that's you know easily the fastest supercomputer on the planet as one cluster. A supercomputer that you would build would take normally three years to plan. And then they deliver the equipment and it takes one year to get it all working," Huang stated, adding, "We're talking about 19 days."

Subscribe to the Benzinga Tech Trends newsletter to get all the latest tech developments delivered to your inbox.

Why It Matters: In July of this year, xAI initiated training of the Memphis Supercluster with 100,000 Nvidia H100 GPUs, making it the most powerful AI training cluster in the world.

Previously, it was reported that Musk and Oracle's Larry Ellison had implored Huang for additional GPUs during a dinner meeting.

This discussion on the Bg2 Pod further highlights the strong relationship between Musk and Huang, which was evident when Musk praised Huang's work ethic earlier in July.

Huang has also previously voiced his appreciation for Musk's efforts, especially in the area of self-driving vehicles.

Check out more of Benzinga's Consumer Tech coverage by following this link.

  • Nvidia's Blackwell Chip Faces AMD's MI350 Challenge In 2025: CEO Lisa Su Says, 'Beginning, Not The End Of The AI Race'

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.
