SGH's Penguin Solutions Selected As The Managed Services Partner For Voltage Park's NVIDIA Clusters; Under The Architecture Being Deployed, Each Compute Node Instance Features Eight NVIDIA H100 Tensor Core GPUs For A Total Of 640GB Of GPU Memory Per Node
Large-scale AI cloud service provider is making 24,000 GPUs available to on-demand users via an innovative exchange platform, as well as for long-term rentals
Penguin Solutions, an SGH brand (NASDAQ:SGH), today announced it has been selected by Voltage Park to provide core professional services, managed services, and software for its large-scale NVIDIA-based clusters. Penguin designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale. Voltage Park is a next-generation cloud company focused on providing accessible machine learning (ML) infrastructure for AI.
This press release features multimedia. View the full release here:
Voltage Park's cloud environment is one of the more significant ML compute infrastructures in the world. Under the architecture being deployed, each compute node instance features eight NVIDIA H100 Tensor Core GPUs for a total of 640GB of GPU memory per node. A high-performance, low-latency fabric built with NVIDIA InfiniBand Networking ensures workloads can scale across clusters of interconnected systems, allowing multiple instances to act as one massive GPU to meet the performance requirements of advanced AI training. High-performance storage is also being integrated to provide a complete solution for AI supercomputing.
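As a rough illustration (not part of the release itself), the quoted 640GB of GPU memory per node follows from eight H100 GPUs at 80GB of HBM each, the standard capacity of the H100 SXM variant; the implied node count below is derived from the 24,000-GPU figure and is an inference, not a stated specification:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: 80GB HBM per GPU (NVIDIA H100 SXM); the node count is
# inferred from the 24,000-GPU total, not stated in the release.

GPUS_PER_NODE = 8
HBM_PER_H100_GB = 80
TOTAL_GPUS = 24_000

memory_per_node_gb = GPUS_PER_NODE * HBM_PER_H100_GB  # 8 * 80 = 640
implied_node_count = TOTAL_GPUS // GPUS_PER_NODE      # 24,000 / 8 = 3,000

print(f"GPU memory per node: {memory_per_node_gb}GB")
print(f"Implied node count:  {implied_node_count}")
```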