Zuck forms Meta Compute to pave the planet with ‘hundreds of gigawatts’ of AI datacenters
Summary
Meta has created a new initiative called Meta Compute to centralise planning, deployment and operations for a vastly expanded fleet of AI datacentres. Mark Zuckerberg says the company intends to build “tens of gigawatts this decade, and hundreds of gigawatts or more over time.” Santosh Janardhan will lead the technical and operational side, while Daniel Gross will head long-term capacity strategy and supplier partnerships. Dina Powell McCormick will focus on government and sovereign partnerships, financing and deployment.
Meta is already pursuing multi‑gigawatt datacentre projects in Ohio, Louisiana and Texas, and has signed long‑term nuclear power contracts (with TerraPower, Oklo and Vistra, plus existing deals) totalling roughly 6.6GW so far. The move accompanies heavy capital spending on AI and a shift in model strategy from open Llama releases to proprietary, codenamed efforts (Avocado, Mango), even as some models remain open source.
Key Points
- Meta Compute is a new centralised org to manage Meta’s global AI datacentre expansion.
- Zuckerberg has publicly set ambitious scale targets: tens of gigawatts this decade and hundreds of gigawatts over time.
- Leadership: Santosh Janardhan (infrastructure/engineering) and Daniel Gross (capacity strategy/suppliers); Dina Powell McCormick to handle government and financing ties.
- Meta is already building gigawatt-scale sites in multiple US states and has contracted roughly 6.6GW of nuclear power to help meet demand.
- The push comes amid heavy AI capex, a change in model strategy (from open Llama releases to proprietary codenamed models) and intense competition for talent and technology.
Author’s take
This is not tinkering — it’s an all‑in industrial play. If Meta nails the energy, supply and political pieces, it could lock in a major infrastructure advantage for large‑scale generative AI. If it stumbles, expect wasted capital and local backlash. Either way, it’s big and worth watching closely.
Context and relevance
Why this matters: AI models at the scale Meta is targeting need enormous, reliable power and custom infrastructure. Meta Compute signals hyperscalers moving from software-first experiments to heavyweight industrial builds — spanning power contracts (including nuclear), site construction, silicon and software stacks, and government-level partnerships. For readers tracking AI supply chains, datacentre siting, energy policy or corporate strategy, this ties into broader trends: escalating capex, competition with OpenAI, Nvidia and Google, and the geopolitical and environmental debates around massive datacentre footprints.
Why should I read this
Short version: Zuck’s gone datacentre‑mad and is lining up the power and people to match. If you care about where AI compute will live, who will pay for the juice, or how this reshapes tech geopolitics and local planning fights, read it. If you just want the headlines: big build, nuclear deals, new org to run it.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/01/12/meta_compute/
