
NVIDIA partners with industry collaborators to build next-generation, high-efficiency gigawatt AI factories for the Vera Rubin era.

NVIDIA Official Website ·  Oct 14, 2025 05:22

At the Open Compute Project Global Summit, NVIDIA outlined the future blueprint for gigawatt-scale AI factories.

At 10:13 a.m. Eastern Time on October 13, NVIDIA (NVDA.US) officially announced the technical specifications of the NVIDIA Vera Rubin NVL144, an MGX-generation open-architecture rack server. More than 50 MGX partners are preparing for the product and will also provide ecosystem support for NVIDIA Kyber.

Approximately 20 industry partners showcased next-generation core technologies and components at the summit, including new chips, components, power systems, and support capabilities for 800-volt direct current (VDC) data centers in the gigawatt era. These technologies will all support NVIDIA's Kyber rack architecture.

  • Foxconn detailed its 40-megawatt Kaohsiung-1 data center, currently under construction in Taiwan and designed specifically for compatibility with 800-volt DC systems.

  • Industry pioneers such as CoreWeave (CRWV.US), Lambda, Nebius (NBIS.US), Oracle (ORCL.US), and Together AI are also advancing 800-volt DC data center designs.

  • Vertiv (VRT.US) released an 800-volt DC MGX reference architecture: a complete power and cooling infrastructure framework that balances space efficiency, cost-effectiveness, and energy efficiency.

Compared to traditional 415-volt or 480-volt AC (VAC) three-phase systems, migrating to an 800-volt DC infrastructure offers multiple advantages for data centers: enhanced scalability, superior energy efficiency, reduced material consumption, and greater performance capacity. The electric vehicle and solar industries have already adopted 800-volt DC infrastructure, leveraging these core benefits.
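The current draw implied by that migration can be sketched with basic power formulas. A minimal Python sketch, assuming an illustrative 1 MW load and a 0.95 power factor for the AC case (both figures are assumptions for illustration, not from the article):

```python
import math

P = 1_000_000.0  # assumed load: 1 MW (illustrative)
PF = 0.95        # assumed power factor for the AC case

# Line current for a 415 VAC three-phase feed: I = P / (sqrt(3) * V_LL * PF)
i_ac = P / (math.sqrt(3) * 415 * PF)   # ≈ 1,464 A per line

# Current for an 800 VDC feed: I = P / V
i_dc = P / 800                          # = 1,250 A

print(f"415 VAC three-phase line current: {i_ac:,.0f} A")
print(f"800 VDC current:                  {i_dc:,.0f} A")
```

Because resistive loss scales with the square of current, even a modest reduction in current compounds into smaller conductors and lower conduction losses at facility scale.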

  • Hewlett Packard Enterprise (HPE.US) announced product support for NVIDIA Kyber, along with support for NVIDIA Spectrum-XGS Ethernet scale-across technology, part of the Spectrum-X Ethernet platform.

  • The Open Compute Project (OCP), founded by industry leaders including Meta (META.US), has brought together hundreds of computing and networking providers in an industry alliance. Its core objective is to redesign hardware to efficiently meet the growing demands of computing infrastructure.

Vera Rubin NVL144: Engineered for Scaling AI Factories

The NVIDIA Vera Rubin NVL144 MGX compute tray features an energy-efficient, fully liquid-cooled, modular design. Its central mid-plane printed circuit board replaces traditional cable connections, speeding assembly and improving maintainability. It also includes modular expansion slots for NVIDIA ConnectX-9 800 Gb/s networking and for NVIDIA Rubin CPX, which targets large-context inference workloads.

NVIDIA Vera Rubin NVL144 achieves significant breakthroughs in accelerated computing architecture and AI performance, specifically designed to meet the demands of advanced inference engines and AI agents.

The product’s core design is based on the MGX rack architecture, with over 50 MGX system and component partners planning to support it. NVIDIA intends to submit the upgraded rack design and compute tray innovations as open standards to the OCP Consortium.

The standardized design of its compute trays and racks lets partners combine components modularly, enabling faster scale-up. The Vera Rubin NVL144 rack design adds several key features: support for high-efficiency 45°C liquid cooling, a new liquid-cooled busbar for higher performance, and an integrated energy storage system with 20 times the capacity to ensure stable power delivery.

Upgrades to the MGX architecture in compute tray and rack designs not only improve AI factory performance but also streamline assembly processes, facilitating rapid deployment of gigawatt-scale AI infrastructure.

As a key contributor across multiple hardware generations, NVIDIA has long been involved in OCP standardization efforts; critical aspects of the NVIDIA GB200 NVL72 system's electromechanical design, for instance, stem from its technical contributions. The same MGX rack footprint supports not only the GB300 NVL72 but also the upcoming Vera Rubin NVL144, Vera Rubin NVL144 CPX, and Vera Rubin CPX, balancing high performance with rapid deployment.

Planning for the Future: NVIDIA Kyber Rack Server Generational Upgrade

The OCP ecosystem is also preparing for NVIDIA Kyber — a product that introduces multiple innovations in 800-volt DC power supply, liquid cooling technology, and mechanical design.

These innovations will lay the foundation for generational upgrades in rack servers, driving the scaled adoption of NVIDIA Kyber (the successor to NVIDIA Oberon) by 2027. By then, each Kyber rack will house a high-density platform accommodating 576 NVIDIA Rubin Ultra GPUs.

The most effective way to address high-power distribution challenges is to raise the voltage. Transitioning from traditional 415-volt or 480-volt three-phase AC systems to an 800-volt DC architecture offers several advantages: rack server partners can upgrade internal components from 54-volt DC to 800-volt DC for higher performance. Meanwhile, DC infrastructure providers, power system and cooling partners, and chip manufacturers have formed an ecosystem alliance around the open standards of the MGX rack reference architecture, and attended this summit.
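The gain from the 54-volt-to-800-volt in-rack upgrade follows directly from Ohm's law. A minimal sketch, assuming a hypothetical 100 kW rack (the power figure is illustrative; only the two voltages come from the article):

```python
# Effect of raising in-rack distribution from 54 VDC to 800 VDC.
# Only the two voltages come from the article; the rack power is assumed.
P = 100_000.0  # assumed rack power: 100 kW (illustrative)

i_54 = P / 54     # ≈ 1,852 A
i_800 = P / 800   # = 125 A

# Resistive loss in the distribution path scales with I^2 for a given resistance.
loss_ratio = (i_800 / i_54) ** 2   # (54/800)^2 ≈ 0.0046

print(f"current at 54 VDC:  {i_54:,.0f} A")
print(f"current at 800 VDC: {i_800:,.0f} A")
print(f"I^2R loss ratio:    {loss_ratio:.4f}")
```

At the same power, the 800 V path carries roughly one-fifteenth of the current, so conduction loss in the distribution path falls to under 0.5% of the 54 V figure for the same conductors.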

NVIDIA Kyber is designed to increase rack GPU density, scale network capacity, and maximize the performance of large-scale AI infrastructures. By vertically arranging compute blades (similar to books on a shelf), Kyber accommodates up to 18 compute blades per chassis; simultaneously, a custom-designed NVIDIA NVLink switch blade is seamlessly integrated into the rear of the chassis via a cable-free backplane, enabling effortless network scalability.

In an 800-volt DC system, copper cables of the same specifications can transmit over 150% more power, eliminating the need for copper busbars weighing 200 kilograms to power a single rack.
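The direction of that claim can be checked with a per-conductor comparison. A rough sketch, assuming a fixed conductor ampacity, three current-carrying conductors for the three-phase AC feed, two for the DC feed, and a 0.95 power factor; the exact percentage depends on these assumptions and on thermal derating, so this illustrates the effect rather than reproducing NVIDIA's figure:

```python
import math

# Hypothetical per-conductor ampacity; the final ratio is independent of it.
I = 1000.0  # amps
PF = 0.95   # assumed AC power factor

# Three-phase 415 VAC feed: three current-carrying conductors.
p_ac_per_wire = math.sqrt(3) * 415 * I * PF / 3   # ≈ 228 kW per conductor

# 800 VDC feed: two conductors (supply and return).
p_dc_per_wire = 800 * I / 2                        # = 400 kW per conductor

print(f"AC power per conductor: {p_ac_per_wire / 1e3:.0f} kW")
print(f"DC power per conductor: {p_dc_per_wire / 1e3:.0f} kW")
print(f"gain: {p_dc_per_wire / p_ac_per_wire - 1:.0%}")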

In the coming years, Kyber will become a core component of hyperscale AI data centers, delivering exceptional performance, efficiency, and reliability for the most advanced generative AI workloads. With NVIDIA Kyber racks, customers can reduce copper usage by several tons, saving millions of dollars in costs.

NVIDIA NVLink Fusion Ecosystem Continues to Expand

Beyond hardware, the adoption of NVIDIA NVLink Fusion technology is accelerating. This technology enables companies to seamlessly integrate semi-custom chips into highly optimized and widely deployed data center architectures, reducing complexity and shortening time-to-market.

Intel (INTC.US) and Samsung Foundry have joined the NVLink Fusion ecosystem, which already includes custom chip designers, CPU partners, and IP partners. The ecosystem's expansion will help AI factories scale rapidly to handle intensive workloads such as model training and agentic AI inference.

As part of the recent collaboration announced between NVIDIA and Intel, Intel will develop x86 CPUs that can be integrated into NVIDIA’s infrastructure platform via NVLink Fusion.

Samsung Foundry and NVIDIA have partnered to meet the growing demand for custom CPUs and custom XPUs, providing end-to-end support for custom chips from design to manufacturing.

Open Ecosystem Collaboration: Scaling the Development of Next-Generation AI Factories

More than 20 NVIDIA partners are developing rack servers through open standards to support the deployment of future gigawatt-scale AI factories. Specific collaborating companies span the following areas:

Chip providers: AOS, EPC, MPS, onsemi, Renesas, Richtek, ROHM, Analog Devices (ADI.US), Infineon Technologies (IFNNY.US), Innoscience (02577.HK), Navitas Semiconductor (NVTS.US), Power Integrations (POWI.US), STMicroelectronics (STM.US), and Texas Instruments (TXN.US)

Power system component providers: BizLink, Delta, LeadWealth, LITEON, Flex (FLEX.US), GE Vernova (GEV.US), and Shenzhen Megmeet Electrical (002851.SZ)

Data center power system providers: ABB, Mitsubishi Electric, Schneider Electric, Heron Power, Hitachi Energy, Eaton (ETN.US), GE Vernova (GEV.US), Siemens (SIEGY.US), and Vertiv (VRT.US)


Editor/Joryn


