Ayar Labs taps Wiwynn to cram 1,024 GPUs into a photonic rack system
Summary
Ayar Labs and ODM partner Wiwynn have produced a rack-scale reference design that uses co-packaged photonics to interconnect more than 1,024 GPUs across racks. By embedding optical engines and user-serviceable laser modules alongside the compute, the design promises far greater reach and bandwidth than copper while keeping per-rack power in the 100–200 kW range, rather than the hundreds of kilowatts seen in some ultra-dense copper systems.
The reference platform on show integrates Ayar’s optical engines (TeraPHY / CPO-style chiplets), front-mounted SuperNova laser modules, and a blade layout that prioritises disaggregated rack architectures — compute racks separated from switch or memory racks. The design addresses mechanical, cooling and telemetry challenges associated with co-packaged optics, and will be presented publicly at OFC. Ayar recently closed a $500m Series E to scale production of its co-packaged optics and is pairing this silicon progress with Wiwynn’s mechanical and systems engineering know-how.
Key Points
- Ayar and Wiwynn’s reference design aims to stitch more than 1,024 GPUs into a single logical system using optical interconnects.
- Co-packaged optics (CPO) reduce power and increase reach/bandwidth versus copper and pluggable optics — Ayar claims up to ~3x bandwidth improvements for some connections.
- The reference rack targets 100–200 kW power draw per rack, far lower than some copper-based ultra-dense racks that hit 600 kW.
- Design emphasises disaggregated architectures: separate racks for compute, switching and extended memory become practical without copper reach limits.
- Major engineering issues tackled include liquid-cooling routing, serviceability of laser modules, and embedded telemetry to limit the blast radius of faulty optics.
- Ayar will demonstrate the platform at OFC and is leveraging recent $500m funding to mass-produce CPO chiplets and optical I/O designs.
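The power figures in the key points can be put in rough perspective with some back-of-envelope arithmetic. The sketch below is purely illustrative: the assumed per-GPU draw (1.2 kW, a stand-in for a modern accelerator plus overhead) and the rack budgets are assumptions for the sake of the calculation, not figures from Ayar or Wiwynn.

```python
# Back-of-envelope: how many racks does a 1,024-GPU cluster need under
# different per-rack power budgets? All inputs are illustrative
# assumptions, not vendor figures.

GPU_COUNT = 1024
WATTS_PER_GPU = 1200  # assumed accelerator + overhead, in watts


def racks_needed(rack_budget_watts: int) -> int:
    """Racks required if each rack is filled up to its power budget."""
    gpus_per_rack = rack_budget_watts // WATTS_PER_GPU
    # ceiling division: a partially filled last rack still counts
    return -(-GPU_COUNT // gpus_per_rack)


for budget_kw in (100, 200, 600):
    gpus = budget_kw * 1000 // WATTS_PER_GPU
    print(f"{budget_kw:>3} kW/rack -> {gpus:>3} GPUs/rack, "
          f"{racks_needed(budget_kw * 1000)} racks for {GPU_COUNT} GPUs")
```

Whatever the exact numbers, a 1,024-GPU system at moderate rack power spans many racks, and that is the design's real argument: at multi-rack distances copper can no longer carry the interconnect, which is where the optical links come in.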
Why should I read this?
Short version: if you care about building huge AI/HPC boxes without burning down your datacentre or having racks the size of small submarines, this is worth five minutes. Ayar+Wiwynn are showing how photonics might finally unshackle GPU scaling from copper limits — that changes how racks get built, cooled and bought. It’s the sort of infra shift that will affect procurement, PUE and architecture choices for anyone running large-scale AI or HPC.
Context and relevance
Co-packaged optics have been touted for years but have struggled with manufacturability, thermal integration and telemetry. Ayar's approach — pairing optical engines and serviceable lasers with an ODM's mechanical platform — addresses both silicon readiness and the practical question of where those chips live in a datacentre. The result is significant for three trends:
1) AI scale-out: Enables larger logical clusters by linking many racks optically, beyond copper reach.
2) Energy and density: Keeps per-rack power comparable to contemporary systems while massively increasing aggregate accelerator counts.
3) Disaggregation: Makes it practical to separate compute, switching and memory into dedicated racks, simplifying upgrades and replacements.
Organisations planning hyperscale AI infrastructure, HPC centres, and vendors in the server supply chain should pay attention — this could shift design decisions in the next generation of datacentres.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/03/11/ayar_labs_wiwynn_photonics/
