Using Nvidia's CUDA-Q platform, Google can harness 1,024 H100 Tensor Core GPUs on the Nvidia Eos supercomputer to run one of the world's largest and fastest quantum device dynamics simulations at low cost, comprehensively simulating a 40-qubit device. A noise simulation that once took a week can now be completed in minutes.
Nvidia and Google are joining forces to accelerate the design of Google's quantum processors.
On Monday, November 18th, Eastern Time, Nvidia announced a collaboration with Google Quantum AI, a team developing quantum computing software and hardware tools at Google, to speed up the design of Google's next-generation quantum computing devices using simulations supported by Nvidia's CUDA-Q platform.
Google Quantum AI is using a hybrid quantum-classical computing platform together with Nvidia's Eos supercomputer to simulate the physics of its quantum processors. This will help overcome a key limitation of current quantum computing hardware: because of what researchers call "noise," quantum hardware can perform only a limited number of quantum operations before a computation must be halted.
China Science and Technology News has previously described noise as a "hazard for quantum processors." Quantum processors are extremely sensitive to noise: even the slightest interference, such as stray photons generated by heat, random signals from nearby electronics, or physical vibrations, can rapidly destroy quantum superposition states, severely degrading the accuracy of quantum computers.
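The toy model below (plain Python, not CUDA-Q code, and not Google's actual simulation method) sketches the simplest textbook noise channel, pure dephasing: the off-diagonal entries of a qubit's density matrix, which encode the superposition, decay exponentially with a characteristic time T2, while the populations on the diagonal are untouched.

```python
import math

# Toy model of pure dephasing (T2 decay) on one qubit, illustrating how
# environmental noise destroys superposition. This is a pedagogical sketch,
# not CUDA-Q's dynamics solver: the off-diagonal density-matrix terms
# (the "coherences") decay as exp(-t / T2) while populations stay fixed.
def dephase(rho, t, T2):
    decay = math.exp(-t / T2)
    return [
        [rho[0][0],         rho[0][1] * decay],
        [rho[1][0] * decay, rho[1][1]],
    ]

# The equal superposition state |+> has maximal coherence 0.5.
rho = [[0.5, 0.5], [0.5, 0.5]]
for step in range(5):
    rho = dephase(rho, t=1.0, T2=1.0)

# After five time constants the coherence is nearly gone,
# while the diagonal populations are unchanged.
print(round(rho[0][1], 4), rho[0][0])  # → 0.0034 0.5
```

A real device simulation must track this kind of decay for every qubit, coupled to its controls and to the other qubits, which is what makes the full dynamics so expensive to compute.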
When announcing the collaboration with Nvidia on Monday, Google Quantum AI's research scientist Guifre Vidal said:
"Only by expanding the scale of quantum hardware while controlling noise can we develop commercially viable quantum computers. By leveraging Nvidia's accelerated computing, we are exploring the impact of noise on designing increasingly larger quantum chips."
Understanding noise in quantum hardware design requires complex dynamics simulations that fully capture how the qubits in a quantum processor interact with their environment. Such simulations are typically very computationally expensive. Nvidia says that with its CUDA-Q platform, Google can use 1,024 H100 Tensor Core GPUs on Nvidia's Eos supercomputer to perform one of the world's largest and fastest dynamics simulations of a quantum device, at a fraction of the usual cost.
With Nvidia's CUDA-Q and H100 GPUs, Google can run comprehensive, realistic simulations of devices with 40 qubits, the largest simulation of its kind ever performed. The simulation techniques provided by CUDA-Q mean that noise simulations that once took a week can now be completed in minutes.
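To get a rough sense of why 40 qubits pushes classical hardware to supercomputer scale, a back-of-envelope estimate helps. The sketch below assumes a full state-vector representation with one complex double per amplitude; this is an illustration of the exponential scaling only, not a description of CUDA-Q's actual solvers, and open-system (noisy) dynamics simulations are more demanding still.

```python
# Back-of-envelope memory estimate for a full state vector of n qubits.
# A state vector holds 2**n complex amplitudes; at double precision each
# amplitude takes 16 bytes. Illustrative scaling only — real dynamics
# simulators use more sophisticated representations.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 35, 40):
    tib = state_vector_bytes(n) / 2**40  # convert bytes to TiB
    print(f"{n} qubits -> {tib:g} TiB")
# → 30 qubits -> 0.015625 TiB
# → 35 qubits -> 0.5 TiB
# → 40 qubits -> 16 TiB
```

Each additional qubit doubles the memory requirement, which is why simulating devices at this scale calls for the pooled memory of many GPUs rather than a single machine.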
Nvidia makes the software behind these accelerated dynamics simulations publicly available on the CUDA-Q platform, enabling quantum hardware engineers to rapidly scale up their system designs.
Tim Costa, Nvidia's director of quantum and HPC, said: "AI supercomputing power will be key to the success of quantum computing. Google's use of the CUDA-Q platform demonstrates the central role that GPU-accelerated simulations play in advancing quantum computing to solve real-world problems."