Meta reveals four Broadcom-built custom AI chips, claims some outperform commercial silicon

Summary

Meta has publicly detailed four previously unannounced MTIA (Meta Training and Inference Accelerator) chips — the MTIA 300, 400, 450 and 500 — developed in close partnership with Broadcom. The designs use a modular chiplet approach with HBM stacks, network chiplets and arrays of processing elements (PEs) that include RISC-V vector cores. MTIA 300 is already in production; MTIA 400 is described as having raw performance competitive with leading commercial products and is on the path to data centre deployment; MTIA 450 and 500 add stepped increases in HBM bandwidth and GenAI inference optimisations, with mass rollouts planned for 2027. Meta says it can ship a new chip roughly every six months and Broadcom expects installations at the scale of multiple gigawatts in 2027 and beyond.

The article also flags a contrast: while Meta is advancing its custom silicon at pace, its Oversight Board criticised the company for failing to reliably detect and label AI-generated misinformation during a conflict. Separately, Meta has introduced an advertiser “location fee” to offset local digital taxes in six countries, including the UK and several EU states.

Key Points

  • Meta announced four MTIA models (300, 400, 450, 500) built with Broadcom; each targets AI inference and ranking and recommendation (R&R) workloads.
  • MTIA 300 is a communications chip for R&R workloads and is already in production.
  • MTIA 400 supports generative AI and R&R; Meta claims it is competitive with leading commercial silicon and is moving towards deployment.
  • MTIA 450 doubles MTIA 400’s HBM bandwidth for much higher GenAI inference performance; mass deployment early 2027.
  • MTIA 500 increases HBM bandwidth by another 50 per cent over the 450 (see the bandwidth sketch after this list), uses a 2×2 compute-chiplet layout and includes an SoC chiplet for PCIe and NICs; planned for 2027.
  • Meta emphasises a reusable, modular design across chiplets, chassis, racks and network infrastructure, enabling a roughly six-month chip cadence.
  • Broadcom expects “multiple gigawatts” of these chips to be installed in Meta data centres from 2027 onward.
  • The Oversight Board criticised Meta’s ability to flag AI-generated misinformation, a sign that moderation and safety are lagging behind its hardware advances.
  • Meta introduced an advertiser “location fee” to offset local digital services taxes in Austria, France, Italy, Spain, Türkiye and the UK.
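
For a quick sense of how the bandwidth claims compound, here is a minimal Python sketch that works through the stated multipliers. The article quotes only relative factors, not absolute GB/s figures, so normalising MTIA 400 to 1.0 is an assumption made purely for illustration.

```python
# Rough, relative comparison of HBM bandwidth across the MTIA generations,
# using only the multipliers quoted in the article. No absolute GB/s figures
# are given, so MTIA 400 is normalised to 1.0 (an assumption for illustration).
relative_bandwidth = {"MTIA 400": 1.0}

# MTIA 450 "doubles MTIA 400's HBM bandwidth".
relative_bandwidth["MTIA 450"] = relative_bandwidth["MTIA 400"] * 2.0

# MTIA 500 adds "another 50 per cent" on top of the 450.
relative_bandwidth["MTIA 500"] = relative_bandwidth["MTIA 450"] * 1.5

for chip, bandwidth in relative_bandwidth.items():
    print(f"{chip}: {bandwidth:.1f}x the MTIA 400's HBM bandwidth")
# Prints 1.0x, 2.0x and 3.0x respectively: taken together, the stated
# multipliers put the MTIA 500 at roughly three times the MTIA 400's
# HBM bandwidth.
```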

Why should I read this?

Quick heads-up: if you care about AI hardware, cloud economics or who wins the datacentre wars, this one’s for you. Meta’s quietly building chip muscle that could shake up the Nvidia-dominated market, and it’s doing it at scale. Also — and annoyingly — while its silicon gets cleverer, its content moderation still trips over obvious AI fakery. Short version: tech progress + policy headaches = worth your two-minute skim.

Context and Relevance

This announcement matters because it underlines a growing trend: hyperscalers designing bespoke accelerators to cut unit costs, raise performance and control supply chains. Custom silicon at Meta’s scale can change vendor dynamics, influence model placement (on-prem versus cloud), and drive new rack and network architectures. The timeline — mass deployment in 2027 and a six-month chip cadence — signals faster iteration cycles for specialised AI hardware. The moderation criticism and the new advertiser location fees also remind readers that technical capability and governance/policy outcomes often move at different speeds.

Author style

Punchy: read the specs and the rollout plan. If you work in AI infrastructure, cloud procurement, or datacentre ops, this is high-impact — Meta’s scale plus modular chiplets could force architecture and pricing shifts across the industry.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/03/12/meta_custom_chips/