Scientific computing is about to get a massive injection of AI
Summary
Nvidia’s Ian Buck says scientific computing is entering a rapid transition as AI is woven into high-performance computing and research workflows. He predicts widespread adoption within a year or two, with machines and software evolving to run mixed workloads: traditional FP64 simulation, low-precision AI training and inference, and even quantum-classical hybrids.
Nvidia is shipping software and models (Holoscan, BioNeMo, Alchemi, and the new Apollo family of open models) and hardware links (NVQLink for quantum integration) to enable the merger of AI and HPC. Buck insists AI won’t replace simulation — FP64 remains essential — but AI will be used to prioritise which simulations to run, making discovery far faster.
Nvidia’s hardware strategy includes specialised parts optimised for inference (Blackwell Ultra, Rubin CPX) alongside devices that retain strong FP64 capability. The company reports more than 80 new supercomputing contracts and cites the forthcoming TACC Horizon system (due in 2026) as an example: a mixed machine offering both substantial FP64 throughput and exascale-level AI performance.
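To make the "AI prioritises which simulations to run" idea concrete, here is a minimal sketch of a surrogate-model loop: a cheap learned model scores a large pool of candidate inputs, and only the top-ranked candidates get the expensive FP64 treatment. This is illustrative only; the scikit-learn regressor and the toy `expensive_simulation` function are our assumptions, not anything from Nvidia's stack or the article.

```python
# Illustrative sketch: use a cheap ML surrogate to rank candidate simulation
# inputs, then spend the expensive FP64 budget only on the most promising ones.
# Assumes scikit-learn and NumPy; nothing here is Nvidia-specific.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x: np.ndarray) -> float:
    """Stand-in for a costly FP64 simulation (hypothetical toy function)."""
    return float(np.sin(3 * x[0]) + 0.5 * x[1] ** 2)

# A small seed set of completed simulations trains the surrogate.
seed_inputs = rng.uniform(-1, 1, size=(32, 2))
seed_scores = np.array([expensive_simulation(x) for x in seed_inputs])
surrogate = GradientBoostingRegressor().fit(seed_inputs, seed_scores)

# Score a large candidate pool cheaply, then run only the top few
# through the real (expensive) simulation.
candidates = rng.uniform(-1, 1, size=(10_000, 2))
predicted = surrogate.predict(candidates)
top = candidates[np.argsort(predicted)[-5:]]

for x in top:
    print(x, expensive_simulation(x))
```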
Key Points
- Ian Buck (Nvidia) expects AI to be pervasive across scientific computing workloads within one to two years.
- AI will augment rather than replace simulation: it’s a tool to predict likely outcomes and prioritise which expensive simulations to run, not an exact substitute for them.
- Nvidia is rolling out software frameworks and open models (Apollo, Holoscan, BioNeMo, Alchemi) to speed domain-specific research workflows.
- Quantum-classical integration (NVQLink / CUDA-Q) is part of Nvidia’s vision for future scientific systems.
- FP64 (double precision) remains crucial for scientific simulation; Nvidia will continue offering accelerators with strong FP64 capability alongside low-precision AI-optimised chips (see the short numeric example after this list).
- Nvidia is shipping specialised inference/training devices (Blackwell Ultra, Rubin family, Rubin CPX) and will offer variants balancing precision and inference performance.
- Adoption is visible in the market: Nvidia reports over 80 new supercomputing contracts and large systems like TACC Horizon (due 2026) that blend FP64 and vast AI compute.
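A quick numeric aside on the FP64 point (our example, not the article’s): at 32-bit precision a small update can vanish entirely once the running value is large, which is exactly the regime long-running simulations operate in, and the reason double precision stays on the menu.

```python
# Not from the article: a two-line illustration of why FP64 still matters.
# Small contributions are rounded away at lower precision once the running
# value is large.
import numpy as np

big, small = 1.0e8, 1.0e-1

print(np.float64(big) + np.float64(small) - np.float64(big))  # ~0.1, increment kept
print(np.float32(big) + np.float32(small) - np.float32(big))  # 0.0, increment lost
```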
Context and relevance
This article explains a major industry shift: the fusion of AI and classical scientific computing. For researchers, datacentre architects and procurement teams this alters hardware selection, software stacks and long-term planning; AI and simulation can no longer be treated as separate silos. It also highlights how vendors (notably Nvidia) are balancing competing demands for extreme precision and extreme AI throughput, and how quantum links are being factored into next-gen designs.
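For a taste of the quantum-classical programming model the article gestures at, below is a minimal CUDA-Q kernel in Python, following the standard GHZ-style example from the CUDA-Q documentation. It runs on a classical simulator backend out of the box; NVQLink itself is a hardware interconnect and is not exercised by this snippet.

```python
# Minimal CUDA-Q sketch (assumes the `cudaq` Python package is installed).
# Builds a small entangled state and samples it; by default this executes on
# a classical simulator backend, not quantum hardware.
import cudaq

@cudaq.kernel
def ghz(qubit_count: int):
    qubits = cudaq.qvector(qubit_count)   # allocate qubits
    h(qubits[0])                          # superposition on the first qubit
    for i in range(1, qubit_count):
        x.ctrl(qubits[0], qubits[i])      # entangle the rest
    mz(qubits)                            # measure all qubits

result = cudaq.sample(ghz, 3)
print(result)  # roughly even counts of '000' and '111'
```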
Author style
Punchy: The piece cuts straight to the point that big changes are coming fast. If you work in HPC or research computing, or you design datacentres, the details here matter: choices about FP64 support, specialised inference silicon and software ecosystems will shape capability and budgets.
Why should I read this?
Quick and useful — if you care about HPC or AI hardware strategy, this gives you the lowdown on where scientific computing is heading and why FP64 isn’t dead. It’s basically a heads-up so you can start thinking about mixed-precision systems, software, and procurement before everyone else does.
