Anthropic reveals $30bn run rate and plans to use 3.5GW of new Google AI chips

Summary

Broadcom’s regulatory filing reveals two major deals with Google: a long-term agreement to develop custom Tensor Processing Units (TPUs) for Google’s next-generation AI racks, and a supply assurance agreement to provide networking and other components through to 2031. The filing also discloses that Anthropic will access roughly 3.5 gigawatts of the next-gen TPU-based AI compute capacity, starting in 2027.

Anthropic, meanwhile, announced its run-rate revenue has surpassed $30 billion (up from c. $9 billion at the end of 2025) and that more than 1,000 business customers now spend over $1 million annually. Broadcom’s filing flags that Anthropic’s consumption of the pledged TPU capacity depends on Anthropic’s continued commercial success and on arrangements with operational and financial partners, signalling financial and operational risk despite Anthropic’s growth claims.

Anthropic said it will continue to use a mix of cloud and specialised hardware — Google TPUs, AWS Trainium, and Nvidia — to match workloads to the best-suited chips.

Key Points

  • Broadcom will develop and supply custom TPUs for future generations of Google’s AI racks under a long-term agreement.
  • Broadcom also agreed a supply assurance deal to provide networking and other components for Google’s next-gen AI racks through to 2031.
  • Anthropic plans to access approximately 3.5GW of the next-generation TPU-based compute capacity starting in 2027.
  • Broadcom’s filing warns Anthropic’s consumption of that capacity depends on Anthropic’s commercial success and on external operational/financial partners, highlighting deployment risk.
  • Anthropic reports run-rate revenue surpassing $30 billion and more than 1,000 customers each spending over $1 million annually — rapid scaling since end-2025.
  • Anthropic uses a multi-cloud, multi-accelerator strategy (Google TPUs, AWS Trainium, Nvidia) to match workloads to optimal hardware.

Why should I read this?

Short version: this is where the sausage gets made. Anthropic is spending big, and hardware players like Broadcom and Google are lining up to build the guts. If you follow AI infrastructure, the cloud wars, or who’s paying whom for compute, this is proper inside baseball — and it matters for costs, capacity and competition.

Context and relevance

Why it matters: the announcement ties together three major trends — hyperscalers outsourcing bespoke accelerator design, the rush to secure large-scale AI compute capacity, and the financial strain of provisioning multi-gigawatt AI fleets. Broadcom’s move into custom TPUs for Google underlines how chip design and datacentre networking are converging. Anthropic’s $30bn run-rate and commitment to 3.5GW signal significant demand for specialised silicon, but Broadcom’s regulatory caveat makes clear that deploying that capacity hinges on commercial and financing arrangements. The outcome will affect cloud-provider dynamics, Nvidia’s competitive position, and the economics of large-scale LLM operations.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/04/07/broadcom_google_chip_deal_anthropic_customer/