HPE adds Blackwell, Rubin systems to Nvidia-backed sovereign AI push
Summary
HPE has expanded its Nvidia-based AI portfolio with systems using Nvidia’s Blackwell GPUs and upcoming Rubin accelerators, and has updated its Alletra Storage MP X10000, which it says is the first object storage platform to achieve Nvidia-Certified Storage (Foundation) validation. The vendor is supplying compute and storage for the EU’s HammerHAI AI Factory, announcing new AI Factory and Supercomputing ranges, and introducing an “AI Grid” to link distributed inference sites. HPE has also broadened HPE Private Cloud AI with air-gapped options and planned Fortanix Confidential AI certification, added Blackwell GPUs across its ProLiant and edge offerings, and detailed rollout and support timelines.
Key Points
- HPE is shipping systems built on Nvidia Blackwell GPUs and planning products using upcoming Rubin accelerators.
- The Alletra Storage MP X10000 has achieved Nvidia-Certified Storage validation for object storage at the Foundation level, covering AI workloads on configurations of up to 128 GPUs.
- HPE is building the EU’s HammerHAI supercomputer to support sovereign AI initiatives and meet regional data‑sovereignty requirements.
- HPE Private Cloud AI has new scale and security options: air‑gapped deployments, network expansion racks to scale to 128 GPUs, and planned Fortanix Confidential AI certification for certain ProLiant systems.
- The new HPE AI Grid connects AI factories and distributed inference clusters across regional and far‑edge sites, enabling service providers to operate thousands of distributed inference sites as a single system.
- HPE is supporting Nvidia STX rack-scale reference designs and integrating Vera Rubin accelerators, BlueField DPUs, Spectrum‑X networking, and ConnectX NICs for AI storage and networking optimisations.
- Availability: RTX PRO 4500 Blackwell support is rolling out in Q1–Q2 2026; network expansion racks will be available in July; further secure blueprints and Fortanix support are scheduled across 2026.
Why should I read this?
If you care about enterprise or sovereign AI infra, this is the rollout to watch. HPE’s tying together Blackwell/Rubin GPUs, validated object storage, air‑gapped private clouds and an “AI Grid” is about making on‑prem and regional AI deployments actually usable at scale — not just demo fodder. It’s worth a quick skim so you know where vendor momentum (and procurement conversations) are heading.
Context and Relevance
This announcement matters because it combines three critical trends: accelerated compute (new Nvidia GPUs), storage validated for GPU‑heavy AI pipelines, and networked deployment models for distributed inference. For organisations and service providers focused on data sovereignty, regulated workloads or low‑latency edge inference, HPE’s packaged approach reduces integration risk and speeds time to production. It also highlights how Nvidia’s ecosystem (GPUs, DPUs, libraries) continues to set de facto standards for enterprise AI stacks.
