Nvidia, Oracle to build 7 supercomputers for Department of Energy, including its largest ever

Summary

The US Department of Energy has partnered with Nvidia and Oracle to build seven new AI-focused supercomputers to speed scientific research and to develop agentic AI for discovery. Two Argonne systems — Solstice (planned 100,000 Blackwell GPUs) and Equinox (10,000 Blackwell GPUs) — will be interconnected and together deliver roughly 2,200 exaFLOPs of AI compute. Equinox is expected online next year; Solstice has no public timeline. Argonne will also get three more Nvidia-based systems (Tara, Minerva and Janus) that will be available to external researchers. Separately, Los Alamos National Laboratory will receive two Vera Rubin-based systems, Vision and Mission, built by HPE on a Cray GX5000 platform and targeted for 2027. Nvidia also announced a partnership with Palantir to integrate accelerated computing, CUDA-X libraries and Nemotron models into Palantir’s AI platform.

Key Points

  • DOE, Nvidia and Oracle to build seven AI supercomputers to accelerate scientific discovery and agentic AI development.
  • Solstice is planned as a 100,000-GPU Blackwell system and Equinox as a 10,000-GPU system; combined, they are expected to deliver roughly 2,200 exaFLOPs of AI compute (see the back-of-envelope check after this list).
  • Equinox expected to come online next year; Solstice timeline has not been disclosed.
  • Argonne will host three additional Nvidia-based systems (Tara, Minerva, Janus) that will be open to researchers at other institutions.
  • Los Alamos will get two Vera Rubin-based systems, Vision (unclassified workloads) and Mission (classified workloads), both due in 2027 and built by HPE.
  • Nvidia announced a Palantir partnership to bring Nvidia accelerated computing, CUDA-X and Nemotron models into Palantir’s AI platform for faster data analysis and agent-driven workflows.
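
A quick back-of-envelope check on those headline numbers, assuming the ~2,200 exaFLOPs figure covers the combined 110,000 Blackwell GPUs and is quoted as low-precision AI throughput (our assumption; the announcement does not spell out the precision):

\[
\frac{2{,}200\ \text{exaFLOPs}}{110{,}000\ \text{GPUs}} \approx 2 \times 10^{16}\ \text{FLOPs per GPU} = 20\ \text{petaFLOPs per GPU}
\]

Twenty petaFLOPs per GPU is in the ballpark of Blackwell's peak throughput at very low precision (FP4-class, with sparsity), which is why "exaFLOPs of AI compute" figures run far ahead of traditional FP64 exascale numbers.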

Author style

Punchy: This isn’t just another vendor press release — it’s a landmark public-sector investment in AI-native supercomputing. If you follow AI infrastructure, research capacity or national security tech, the specifics here matter. Read the detail if you want the full picture.

Context and Relevance

The scale of the planned systems signals a step-change in US government-backed AI compute capacity. Multi-exaFLOP infrastructure at national labs strengthens research in healthcare, materials science, energy and security, and deepens ties between major vendors (Nvidia, Oracle, HPE) and public research. It also raises governance questions — notably how agentic AI will be validated before being put to work in high-stakes scientific and national-security settings.

Why should I read this?

Short and informal: giant GPU farms are coming to US labs and they’ll let researchers run far bigger AI experiments. We’ve pulled out the headline numbers and what they mean — saves you reading the announcement noise. If you care about who controls top-tier AI compute or where big science will be done next, this matters.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2025/10/28/nvidia_oracle_supercomputers_doe/