Your datacentre’s power architecture called. It’s not happy

Summary

For years datacentres were engineered around 12V and 48V rack architectures optimised for roughly 10–15 kW per rack. Accelerated computing with GPUs and AI accelerators has upended that model: modern AI racks target 100+ kW (Nvidia’s NVL72 is cited at ~120 kW). Delivering that power at 48V requires kiloamperes of current, which drives massive copper busbars, overheating connectors, significant resistive losses and serviceability headaches. The industry is shifting to high-voltage DC (400V/800V) distribution: central AC-to-DC conversion at the facility edge, HVDC bused across the floor, and efficient DC/DC conversion at rack or shelf level. GaN/SiC semiconductors and local energy storage are key enablers, both for conversion efficiency and for absorbing the millisecond-scale load transients GPUs produce.

Key Points

  • AI racks can demand 100+ kW, forcing currents of kiloamperes at low voltages and creating thermal, loss and mechanical problems.
  • Resistive losses follow P = I²R, so the kiloampere currents needed at 48V produce large Joule heating and waste power that could otherwise run compute.
  • Moving to 400V or 800V DC cuts current to a fraction, slashing resistive losses (which fall with the square of current), copper mass and connector heating; see the worked numbers after this list.
  • Centralising AC-to-DC conversion at the facility perimeter (to HVDC) simplifies conversion chains and eases integration with battery backup.
  • GaN and SiC power devices, plus on-rack energy storage, are essential to manage conversion efficiency and sub-second GPU power spikes.
  • ORv3/48V ecosystems remain important but are nearing practical limits; a transition to standardised HVDC rack architectures is already underway.
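
A quick back-of-envelope check makes the point. The Python sketch below is illustrative only: the 120 kW figure follows the NVL72-class rack cited above, while the 0.2 mΩ distribution-path resistance is an assumed value, not one from the article.

    # Back-of-envelope comparison: the same rack power delivered at 48V vs 800V DC.
    # 120 kW matches the NVL72-class figure cited above; the 0.2 milliohm
    # distribution-path resistance is an illustrative assumption.
    RACK_POWER_W = 120_000           # ~120 kW AI rack
    PATH_RESISTANCE_OHM = 0.0002     # assumed busbar + connector resistance (0.2 mOhm)

    for voltage_v in (48, 800):
        current_a = RACK_POWER_W / voltage_v             # I = P / V
        loss_w = current_a ** 2 * PATH_RESISTANCE_OHM    # Joule heating, P = I^2 * R
        print(f"{voltage_v:>4} V: {current_a:6.0f} A, {loss_w:6.1f} W lost in the path")

    # Prints roughly:
    #   48 V:   2500 A, 1250.0 W lost in the path
    #  800 V:    150 A,    4.5 W lost in the path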

Content summary

The article explains the physical and engineering reasons why legacy low-voltage rack power architectures struggle with modern AI loads. Concrete examples and numbers show that delivering the same power at 48V requires very high currents, which increase resistive losses (Joule heating) and demand heavy copper busbars and beefy connectors. Even small increases in contact resistance produce substantial local heating at kiloampere currents.
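
As a hypothetical illustration of that last point, the short sketch below estimates the heat dissipated in a single connector carrying 2,500 A (120 kW at 48V); the contact-resistance values are assumptions for illustration, not figures from the article.

    # Local heating in a single connector, P = I^2 * R.
    # Contact resistances are illustrative assumptions; real values depend on
    # connector design, contact pressure, plating and wear.
    CURRENT_A = 2500  # ~120 kW delivered at 48 V

    for label, contact_resistance_ohm in (("healthy contact", 20e-6),
                                          ("degraded contact", 100e-6)):
        heat_w = CURRENT_A ** 2 * contact_resistance_ohm
        print(f"{label}: {contact_resistance_ohm * 1e6:.0f} uOhm -> {heat_w:.0f} W in one joint")

    # Going from 20 uOhm to 100 uOhm turns ~125 W into ~625 W of heat
    # concentrated in a single connection.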

It outlines the emerging power hierarchy: medium-voltage AC enters the facility (~13.8 kV), is rectified once to high-voltage DC at the perimeter, and that HVDC is bused to racks. Racks accept 800VDC feeds and use efficient DC/DC converters to step down for shelves and servers. Vendors including Nvidia, Eaton, Vertiv and Delta are developing 800V-compatible rectifiers, converters and components. The piece also notes the role of wide-bandgap semiconductors (GaN, SiC) and energy storage to smooth rapid load transients common in distributed GPU workloads.
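
To see why collapsing the conversion chain matters, the sketch below multiplies out assumed stage efficiencies for a legacy 48V chain and an 800VDC chain. The article describes the topology; the stage list and efficiency numbers here are illustrative assumptions only, not vendor data.

    # End-to-end efficiency is the product of the stage efficiencies, so fewer
    # conversion stages compound into real savings at 100+ kW per rack.
    # All efficiency figures below are assumptions for illustration, not vendor data.
    from math import prod

    legacy_48v_chain = {                      # AC distributed to the rack
        "UPS (double conversion)": 0.96,
        "rack PSU (AC -> 48 V)": 0.96,
        "shelf DC/DC (48 -> 12 V)": 0.97,
        "board VRM (12 V -> core)": 0.94,
    }
    hvdc_800v_chain = {                       # one rectification at the perimeter
        "perimeter rectifier (AC -> 800 VDC)": 0.98,
        "rack DC/DC (800 V -> 48/12 V)": 0.98,
        "board VRM (-> core)": 0.94,
    }

    for name, chain in (("48 V chain", legacy_48v_chain),
                        ("800 VDC chain", hvdc_800v_chain)):
        print(f"{name}: {prod(chain.values()):.1%} end to end")

    # With these assumed numbers: ~84% vs ~90%, i.e. several kW less waste heat
    # per 120 kW rack before the electrons ever reach a GPU.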

Context and relevance

If you plan, build or operate AI-dense datacentres, this is material. Power architecture choices now shape capital layout, cooling, mechanical supports and long-term upgradeability. The move to HVDC ties into broader trends in electrification, battery-backed infrastructure and adoption of GaN/SiC power electronics. Hyperscalers and vendors are already prototyping 800V solutions, so new projects should consider future-proofing for HVDC to avoid expensive retrofits.

Why should I read this?

Because if you sign off budgets or racks, you’ll want to know why your neat 48V design might turn into a heavy, hot, copper-laden nightmare when the GPUs arrive. This article spells out the physics and the practical fixes — read it so you don’t get surprised by amps, heat and a painful upgrade bill later.

Author style

Punchy: the piece cuts through the engineering detail and makes the case that the shift to HVDC isn’t marketing — it’s physics. If you’re responsible for infrastructure planning, consider this essential reading rather than optional background noise.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/03/11/your_datacenters_power_architecture_called/