Amazon can’t build AI capacity fast enough, throws another $200B at the problem

Summary

AWS is pouring huge capital into AI infrastructure: Amazon expects to spend $200 billion in 2026, mostly for AWS, and intends to double datacentre capacity by the end of 2027. CEO Andy Jassy says demand outstrips supply — “as fast as we install this AI capacity, we are monetising it” — and AWS added 3.9 gigawatts of power in the last 12 months. The investment focuses on AI workloads and Amazon’s own Trainium and Graviton chips, which the company says deliver compelling price-performance. Despite strong AWS growth and 35% operating margins, Amazon shares fell in after-hours trading following the announcement.

Key Points

  • AWS plans aggressive spending: $200bn in 2026, aiming to double datacentre capacity by the end of 2027.
  • Supply is constrained: AWS added 3.9GW of power in the past year but still can’t keep up with demand.
  • Monetisation is immediate — Jassy says new AI capacity is being filled as soon as it comes online.
  • Financials: AWS Q4 sales $35.6bn (+24% YoY); AWS annual sales $128.7bn (+20%); annualised run rate $142bn.
  • Amazon’s homegrown chips (Trainium, Graviton) are a strategic advantage; Trainium-linked clusters power major model training (Project Rainier).
  • Trainium3 supply will be fully committed by mid-2026; Trainium4 arrives in 2027 with large performance gains.
  • Market reaction: strong fundamentals but stock fell amid broader tech sell-off, highlighting investor sensitivity to heavy capex.
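
The run-rate figure in the list above follows from simple annualisation: quarterly revenue multiplied by four, which also lets you sanity-check the reported growth rates. A minimal sketch (the prior-year figure is back-derived from the stated +24% growth, not taken from the article):

```python
# Sanity-check AWS's reported figures from the earnings summary.
q4_sales_bn = 35.6          # AWS Q4 sales, $bn (reported)
yoy_growth = 0.24           # +24% year on year (reported)

# Annualised run rate: quarterly revenue x 4.
run_rate_bn = q4_sales_bn * 4
print(f"Annualised run rate: ${run_rate_bn:.1f}bn")   # ~ $142bn, matching the article

# Back-derive the prior-year Q4 figure implied by +24% growth (an inference, not a reported number).
prior_q4_bn = q4_sales_bn / (1 + yoy_growth)
print(f"Implied prior-year Q4: ${prior_q4_bn:.1f}bn")
```

The ~$142bn run rate in the article is consistent with the $35.6bn quarterly figure to within rounding.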

Content summary

On its earnings call Andy Jassy framed the spending as an “extraordinary” chance to expand AWS’s footprint as customers move data and apps to the cloud to exploit AI. While some capital still supports non-AI core workloads, the majority is earmarked for AI infrastructure. Jassy emphasised AWS’s forecasting experience and its confidence in returns on invested capital, arguing this is not a reckless topline grab.

The company highlights its chip roadmap: Trainium2 powers a half-million+ accelerator cluster used by customers such as Anthropic; Trainium3 is already in market and fully subscribed; Trainium4 (2027) promises ~6x compute and ~4x memory bandwidth improvements over Trainium3, and Trainium5 is already being discussed.

Context and relevance

Why this matters: the story crystallises several industry trends — explosive AI demand, datacentre and grid constraints, and a race among hyperscalers to secure capacity and differentiated silicon. AWS’s huge spending plans pressure competitors to scale quickly and may exacerbate supply chain, power and permitting bottlenecks (notably in regions with long grid-connection waits). For CIOs, FinOps teams and cloud architects this shapes pricing, availability and vendor strategy for AI workloads through 2027 and beyond.

Why should I read this?

Short version: if you work with cloud or AI, this one affects your roadmap. Amazon’s effectively telling the market it’ll keep buying compute until demand is sated — and it’s doing so with its own chips. That changes everything from cost models to procurement timelines. We’ve done the skimming so you don’t have to — read this if you want the quick lay of the land.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/02/06/amazon_earnings_q4_2025/