
A signal: Apple confirms its AI models were trained on Google's TPUs

cls.cn ·  08:58

Apple said its AI models were trained on custom chips designed by Google, a sign that large tech companies are seeking alternatives to Nvidia for training cutting-edge AI models.

On Monday, Apple released a technical paper stating that the two AI models underpinning its Apple Intelligence system were pre-trained on cloud chips designed by Google.

Analysts pointed out that Apple's decision to rely on Google's cloud infrastructure is a significant signal: no company currently surpasses Nvidia in AI processors, a market in which Nvidia holds roughly 80% share.

In the technical report, Apple detailed its use of Google's in-house Tensor Processing Units (TPUs) for training. Apple also released a preview version of Apple Intelligence for some devices on Monday.

Apple said its Apple Foundation Model (AFM) on-device and AFM server models were trained on "Cloud TPU clusters," meaning Apple rented compute from a cloud service provider.

"This system enables us to train AFM models efficiently and scalably, including AFM devices, AFM servers, and larger models," the company wrote in the report.

Specifically, Apple used 2,048 TPUv5p chips, the most advanced TPU, first released last December, to build the AI model that runs on iPhones and other devices (the on-device AFM). For the AFM server model, Apple deployed 8,192 TPUv4 processors.

Notably, Apple did not explicitly say in the report that it used no Nvidia chips, but its description of the hardware and software infrastructure behind its AI tools and features made no mention of Nvidia hardware.

Unlike Nvidia, which sells its chips and systems as standalone products, Google sells access to TPUs through its Google Cloud platform: customers must build their software on Google's cloud to use the chips.

According to Google's website, its latest TPU costs less than $2 per chip-hour with a three-year commitment. Google first built TPUs for internal workloads in 2015 and opened them to the public in 2017; they are now among the most mature custom chips designed for AI.
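As a rough illustration of what that pricing implies at the scale reported in Apple's paper, the sketch below multiplies the chip count by the cited per-chip-hour rate. The $2/hour figure is treated as a uniform upper bound, which is a simplification; the result is illustrative arithmetic, not Apple's actual spend.

```python
# Back-of-the-envelope cost sketch for Apple's reported AFM server cluster.
# Assumptions (not from Apple): every chip billed at the ~$2/chip-hour
# three-year-commitment rate, running around the clock.
chips = 8192                 # TPUv4 processors reported for the AFM server model
rate_per_chip_hour = 2.0     # upper-bound USD rate cited for Google's TPUs
hours_per_day = 24

daily_cost = chips * rate_per_chip_hour * hours_per_day
print(f"Upper-bound cluster cost: ${daily_cost:,.0f} per day")
# → Upper-bound cluster cost: $393,216 per day
```

Even under this crude upper bound, renting such a cluster runs to hundreds of thousands of dollars per day, which helps explain why training frontier models is concentrated among a handful of large companies.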

Google is itself one of Nvidia's major customers: it uses both Nvidia's GPUs and its own TPUs to train AI systems, and it sells access to Nvidia's technology through its cloud.

Apple's engineers stated in the paper that Google's chips would make it possible to build models even larger and more complex than the two discussed there. The company also said earlier that inference (using pre-trained AI models to generate content or make predictions) will be performed partly on Apple's own chips in its data centers.



The above content is for informational or educational purposes only and does not constitute any investment advice related to Futu. Although we strive to ensure the truthfulness, accuracy, and originality of all such content, we cannot guarantee it.