Intel backs SambaNova’s $350M bid to challenge GPUs in AI inference
Summary
SambaNova has raised $350M, with Intel Capital among the investors, to push its reconfigurable dataflow unit (RDU) architecture as an alternative to GPU-based inference. SambaNova and Intel will enter a multi-year collaboration that includes hardware-software co-design and the use of Xeon CPUs alongside SambaNova's accelerators.
The startup plans to ship its SN50 accelerator later this year, claiming 2.5x higher 16-bit floating-point performance and 5x higher FP8 performance than the SN40L (roughly 1.6 and 3.2 petaFLOPS respectively). Each RDU packs 432MB of on-chip SRAM, 64GB of HBM2E (1.8TB/s) and 256GB–2TB of DDR5, and offers a switched fabric with 2.2TB/s of bidirectional chip-to-chip bandwidth.
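A quick back-of-envelope check, sketched below in Python using only the multipliers and SN50 figures quoted above (the implied SN40L numbers are inferred from those claims, not vendor-published specs), shows both claims point to the same SN40L baseline of roughly 0.64 petaFLOPS per format:

```python
# Sanity-check the claimed generational gains by back-computing the implied
# SN40L baseline from the SN50 figures and multipliers quoted in the article.
sn50_pflops = {"16-bit": 1.6, "FP8": 3.2}        # claimed SN50 peak, petaFLOPS
gain_over_sn40l = {"16-bit": 2.5, "FP8": 5.0}    # claimed uplift vs SN40L

for fmt, pflops in sn50_pflops.items():
    implied_sn40l = pflops / gain_over_sn40l[fmt]
    print(f"{fmt}: SN50 ~{pflops} PFLOPS -> implied SN40L ~{implied_sn40l:.2f} PFLOPS")
```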
On paper the SN50 trails Nvidia's Blackwell-class GPUs on raw dense FP8 compute, HBM capacity and peak memory bandwidth, but SambaNova says its dataflow approach cuts data movement and improves end-to-end token-generation economics, claiming up to 5x higher per-user generation speed than Nvidia's B200 in some scenarios. SoftBank is named as an early customer.
Key Points
- SambaNova raised $350M to accelerate development and commercialisation of its RDU-based inference hardware and software.
- Intel Capital participated and will collaborate with SambaNova on a multi-year hardware–software co-design programme using Xeon CPUs alongside RDUs.
- The SN50 offers significant generational gains over the SN40L (2.5x 16-bit, 5x FP8) and aims to improve inference economics through a three-tier memory hierarchy and a high-bandwidth switched fabric.
- Despite lower peak FLOPS and bandwidth on paper compared with Nvidia Blackwell GPUs, SambaNova argues its dataflow architecture yields better real-world token-generation performance and utilisation.
- SambaNova emphasises rack-level economics and utilisation — targeting infrastructure sales rather than building a dedicated inference cloud.
Author style
Punchy: this is a clear play to upset GPU dominance in inference. Intel’s backing turns SambaNova from a niche upstart into a vendor with real go-to-market heft and potential access to major customers and distribution. If you follow AI infra, this is worth paying attention to.
Context and Relevance
This funding and Intel tie-up matter because inference is where margins, power and scale meet real-world deployment choices. Nvidia still dominates, but rising memory costs, utilisation challenges as customers run many custom models, and architectural alternatives (dataflow RDUs) create openings for challengers. Intel gains a strategic partner to stay relevant in AI infrastructure; SambaNova gains capital, scale and co-design capabilities.
Why should I read this?
Because if you care about the economics of running LLMs — and whether GPU monopolies stay unchallenged — this story sums up the next big fight. SambaNova claims it can be faster and cheaper in real workloads, Intel brings scale, and that combo could reshape inference buying decisions. Short version: money + Intel = suddenly important.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/02/24/sambanova_intel_funding/
