Nvidia is accelerating its efforts to bridge the GPU and quantum computing realms through cuQuantum, its Tensor Core-capable quantum simulation toolkit. Through it, the company aims to accelerate quantum circuit simulation workloads at scales beyond the reach of today's NISQ (Noisy Intermediate-Scale Quantum) systems. But for that to happen, the company is betting on further integration between quantum and classical systems towards hybrid solutions. Unsurprisingly, GPUs are at the forefront of Nvidia's quantum developments.
Nvidia has set its sights on creating a low-latency connection that can link its GPUs – and their quantum-simulation-capable Tensor cores – with current and upcoming QPUs (Quantum Processing Units). The aim here is to take advantage of GPUs' massively parallel processing, leveraging it for quantum-adjacent workloads such as circuit optimization, calibration, and error correction, while lifting the communications bottleneck between quantum and classical systems.
Another element of Nvidia’s approach to quantum computing looks to offer a common software layer that’s not unlike the company’s CUDA programming model.
The idea is for this programming model to significantly simplify code-level interaction with QPUs and quantum simulations, which today is still done in what amounts to low-level assembly code. The goal is a unified, quantum-geared programming model and compiler toolchain that abstracts away differences between QPUs, letting developers focus on the quantum capabilities themselves. Nvidia hopes to facilitate the transition from classical to quantum-classical workloads by allowing users to partially port their HPC (High-Performance Computing) apps to a simulated QPU first, and then to the physical processor itself.
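To make the simulation side of this concrete, the sketch below runs a tiny two-qubit circuit (Hadamard plus CNOT, producing a Bell state) as a plain statevector simulation. This is a toy NumPy illustration of the kind of workload a GPU-accelerated simulator speeds up, not the cuQuantum API itself.

```python
import numpy as np

# Two-qubit statevector simulation of a Bell circuit.
# (Plain NumPy for illustration -- cuQuantum performs this class of
# computation on GPUs, where statevectors grow as 2^n.)

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])           # CNOT, control on qubit 0

state = np.zeros(4)
state[0] = 1.0                            # start in |00>

state = np.kron(H, np.eye(2)) @ state     # H on qubit 0: (|00>+|10>)/sqrt(2)
state = CNOT @ state                      # entangle: (|00>+|11>)/sqrt(2)

probs = np.abs(state) ** 2                # measurement probabilities
print(probs)                              # ~[0.5, 0, 0, 0.5]
```

Doubling the qubit count squares the statevector size, which is why dedicated GPU libraries (and clever tensor-network contractions) matter for simulating circuits of useful depth.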
According to Nvidia, dozens of organizations are already leveraging its cuQuantum toolkit to support their quantum work. Amazon Web Services already offers cuQuantum integration through its Braket service, showcasing a 900x speedup on quantum machine learning workloads. Other platforms leveraging Nvidia’s cuQuantum include Google’s qsim, IBM’s Qiskit Aer, Xanadu’s PennyLane, and Classiq’s Quantum Algorithm Design platform. Nvidia recently achieved a world record in quantum computing simulation by leveraging its cuQuantum framework and its uber-powerful Selene supercomputer, powered by its DGX SuperPODs.
Joining Nvidia’s developing cuQuantum ecosystem is Menten AI, a drug-discovery startup that aims to leverage cuQuantum’s tensor network library to simulate protein interactions and new drug molecules. The aim is to speed up drug design, whose workloads are naturally suited to the probabilistic nature of quantum computing.
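The tensor network approach mentioned above boils down to contracting many small tensors in a cheap order rather than materializing one exponentially large object. A minimal NumPy sketch of such a contraction (again illustrative, not the cuQuantum tensor network API):

```python
import numpy as np

# Toy tensor-network contraction with einsum -- the class of operation
# a GPU tensor network library accelerates. Contracting a chain of
# tensors pairwise avoids ever forming the full joint tensor.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
C = rng.standard_normal((8, 8))

# Sum over the shared indices j and k:
#   result[i, l] = sum_{j,k} A[i, j] * B[j, k] * C[k, l]
result = np.einsum('ij,jk,kl->il', A, B, C)

# For this simple chain, the contraction equals a matrix product chain.
assert np.allclose(result, A @ B @ C)
```

In real circuit simulations the tensors carry many more indices, and choosing a good contraction order is itself a hard optimization problem that such libraries tackle.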
“While quantum computing hardware capable of running these algorithms is still being developed, classical computing tools like NVIDIA cuQuantum are crucial for advancing quantum algorithm development,” said Alexey Galda, a principal scientist at Menten AI.
Nvidia has achieved remarkable HPC market penetration through its CUDA software stack, and it seems the company is aiming to repeat the feat for the quantum realm via cuQuantum. In what amounts to one of the most complex research fields worldwide, it certainly sounds like a streamlined software package would help accelerate the road to quantum by leaps and bounds.