
Track Hyper | Lenovo's New Server Will Be Powered by NVIDIA Blackwell

wallstreetcn ·  08:04

The ThinkSystem SC777 is equipped with the NVIDIA Blackwell AI accelerator card.

Author: Zhou Yuan / Wallstreetcn

At Tech World 2024, Lenovo Group Chairman and CEO Yang Yuanqing and NVIDIA CEO Jensen Huang jointly announced that the SC777 model in the Lenovo ThinkSystem server series will be equipped with the NVIDIA Blackwell AI accelerator card (GPU).

Earlier, on March 18 this year at GTC (GPU Technology Conference) 2024, Lenovo Group and NVIDIA announced a collaboration to launch new hybrid AI solutions, helping enterprises and cloud providers gain the accelerated computing capabilities crucial for success in the AI era and turning AI from concept to reality.

At the same time, to handle large-scale AI workloads efficiently, Lenovo introduced an extension of its ThinkSystem AI product portfolio, including two 8-way NVIDIA GPU systems.

The ThinkSystem server series is Lenovo's data center infrastructure product line, comprising various models aimed at different enterprise applications and services. The series is currently divided into two known model lines: SC and SR.

Among these, the SR line has launched a variety of products, while the SC line currently has only the SC777. Its key features include support for large-scale computing clusters, strong scalability, and flexible configurability, making it suitable for a wide range of enterprise scenarios.

From high-performance computing in data centers to edge computing scenarios, the flexible architecture and excellent energy efficiency of the Lenovo ThinkSystem SC777 allow it to adapt to dynamically changing business needs. The server's security design also stands out.

The ThinkSystem SC777 can quickly run complex AI training, image processing, and video analysis tasks, and can adapt rapidly to different workload requirements through highly flexible configurations.

Blackwell is NVIDIA's new generation of AI chips and supercomputing platform, named after the American mathematician David Harold Blackwell. The GPU architecture packs 208 billion transistors and is manufactured on a custom TSMC 4NP process. All Blackwell products use two reticle-limit dies connected into a single unified GPU via a 10 TB/s chip-to-chip interconnect.

The second-generation Transformer Engine combines custom Blackwell Tensor Core technology with NVIDIA TensorRT-LLM and NeMo framework innovations to accelerate inference and training for large language models (LLMs) and mixture-of-experts (MoE) models.

To effectively boost inference for MoE models, Blackwell Tensor Cores add new precisions, including new community-defined microscaling formats, providing high accuracy and serving as an easy replacement for larger precisions.

The Blackwell Transformer Engine uses a fine-grained scaling technique called micro-tensor scaling to optimize performance and accuracy, enabling 4-bit floating point (FP4) AI. This doubles the performance and the model sizes that memory can support for next-generation models while maintaining high accuracy.

Blackwell features NVIDIA Confidential Computing technology, which uses strong hardware-based security to protect sensitive data and AI models from unauthorized access. It is the industry's first GPU with Trusted Execution Environment I/O (TEE-I/O) capability, delivering a high-performance confidential computing solution in conjunction with TEE-I/O-capable hosts, with real-time protection over NVIDIA NVLink.

Overall, the Blackwell GPU is NVIDIA's next-generation core platform for accelerated computing and generative artificial intelligence (AI), featuring a new architecture design and six transformative accelerated computing technologies.

These technologies will drive breakthroughs in fields such as data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI. Particularly noteworthy, its AI inference performance is claimed to be up to 30 times that of the previous generation, with energy consumption reduced to as little as 1/25th, marking a significant advance for the AI and computing fields.


