
Tesla CEO Elon Musk Says He Feels 'Like A Wince' When Saying 'GPU' As He Boasts Rapid Progress With AI Compute: 'Need A New Word'

Benzinga ·  Apr 24 15:39

Elon Musk, the CEO of Tesla Inc. (NASDAQ:TSLA), has expressed his dissatisfaction with the term GPU while announcing that the company's core AI infrastructure is no longer training-constrained.

What Happened: During the first-quarter earnings call on Tuesday, Musk revealed that Tesla has been actively expanding its core AI infrastructure. He stated, "We are, at this point, no longer training-constrained, and so we're making rapid progress."

The tech billionaire also disclosed that Tesla has installed and commissioned 35,000 Nvidia H100 GPUs, and he anticipates that figure could reach roughly 85,000 by the end of the year, primarily for training purposes.

"We are making sure that we're being as efficient as possible in our training," Musk said, adding that it is not just about the number of H100s but "how efficiently they're used."

During the conversation, Musk also expressed his discomfort with the term GPU. "I always feel like a wince when I say GPU because it's not. The G stands for graphics, and it doesn't do graphics," the tech mogul stated.

"GPU is [the] wrong word," he said, adding, "They need a new word."


Why It Matters: Musk's statement came after Tesla reported first-quarter revenue of $21.0 billion, down 9% year over year and missing the Street consensus estimate of $22.15 billion. The company said its revenue was affected by reduced average selling prices and lower vehicle deliveries during the quarter.

On the other hand, Nvidia Corporation (NASDAQ:NVDA) made a significant impact on the AI and computing sectors last year with its H100 data center chip, which helped add more than $1 trillion to the company's market value.

In February, it was reported that demand for the H100, which trains large language models (LLMs) four times faster than its predecessor, the A100, and responds to user prompts 30 times faster, was so substantial that customers were facing wait times of up to six months.

Meanwhile, earlier this month, Piper Sandler analyst Harsh V. Kumar met with Nvidia's management team and reported that, despite the Hopper GPU being on the market for almost two years, demand continues to outstrip supply. Customers are hesitant to shift their orders from Hopper to the newer Blackwell platform, fearing extended wait times due to anticipated supply constraints.


Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Photo via Shutterstock
