Apple goes all-in on AI acceleration with M5 MacBook Pro, iPad Pro, and Vision Pro
Summary
Apple has started shipping its fifth-generation M-series silicon, the M5, in the new MacBook Pro, iPad Pro and Vision Pro. Superficially similar to the M4, the M5 brings important architectural changes: up to 10 CPU cores, 10 GPU cores and a 16-core Neural Engine (NPU), with each GPU core gaining a neural processor (a tensor-style core) to accelerate the matrix operations that dominate generative AI and ML workloads.
The GPU improvements reportedly deliver 30–45% better graphics performance and up to 4x the AI compute of the M4, while the CPU is claimed to be up to 15% faster in multi-threaded tasks. Unified memory options are 16, 24 or 32 GB, with roughly 30% higher bandwidth (~153 GB/s). The M5 is fabbed on a TSMC 3nm process and ships in devices priced from US$999 (iPad Pro) to US$3,499 (Vision Pro).
Key Points
- M5 introduces per-GPU-core neural processors to speed matrix operations common in generative AI and ML.
- Apple claims the M5 GPU offers 4x the AI compute of the M4; specific TFLOPS figures were not disclosed.
- Graphics performance improved by ~30–45% in GPU-heavy workloads thanks to upgraded shader and ray-tracing cores.
- CPU maintains a 4+6 core layout but is up to 15% faster in multi-threaded tasks compared with M4.
- Memory bandwidth increased by ~30% to ~153 GB/s; unified memory remains fixed at purchase (16/24/32 GB).
- Neural Engine (NPU) enhancements aim to accelerate background and on-device ML tasks, though Apple did not publish TOPS numbers.
- M5 is launching in the MacBook Pro, iPad Pro and Vision Pro lines now; other Macs (Air, Mini, Studio) will follow later.
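The quoted bandwidth jump is easy to sanity-check, assuming the M4's Apple-published 120 GB/s unified-memory bandwidth as the baseline (that baseline figure is not stated in this article):

```python
# Sanity check: is ~153 GB/s roughly a 30% uplift over the
# M4's 120 GB/s unified-memory bandwidth?
m4_bandwidth_gbps = 120.0  # M4 baseline (assumed from Apple's M4 specs)
m5_bandwidth_gbps = 153.0  # M5 figure quoted above

uplift = (m5_bandwidth_gbps - m4_bandwidth_gbps) / m4_bandwidth_gbps
print(f"{uplift:.1%}")  # → 27.5%, i.e. "roughly 30%"
```

Close enough to the "~30%" marketing figure, and consistent with the memory-bandwidth comparison drawn against competitors below.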
Why should I read this?
Because if you care about on-device AI, faster image and text generation or smoother Apple Intelligence features, this is the update that actually matters. It’s not just a modest speed bump — Apple has stitched tensor-style cores into GPU cores, which could seriously improve local LLM and image workloads. If you’re buying an Apple device in the next year or building apps for macOS/iPadOS/visionOS, this shapes what’s possible on-device.
Context and Relevance
Apple is clearly prioritising AI acceleration in silicon, aligning hardware with OS-level AI features. The move narrows the gap with specialised AI hardware elsewhere in the ecosystem: the M5's memory bandwidth now sits in the same range as Qualcomm's X2 Elite and Intel's Panther Lake, and the per-core neural processors echo the tensor cores of discrete GPUs.
For developers and pros, faster AI on the device reduces reliance on cloud inference for many tasks, improving latency and privacy. For buyers, it means better on-device generative experiences and improved graphics — but remember unified memory is fixed at purchase, so spec wisely.
Competitors (Intel Panther Lake, Qualcomm X2 Elite) are due to ship later and offer different trade-offs (more CPU cores, discrete graphics options), so M5’s early launch gives Apple a head start in integrated on-device AI performance.
Author style
Punchy: This is a major silicon update that isn’t just marketing spin — Apple has added real AI-focused hardware changes. If you follow device AI or choose Apple kit for work or dev, read the detail; these changes affect what developers can run locally and what users will experience day-to-day.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2025/10/15/apple_goes_all_in_on/
