Not all networks can handle AI traffic – and experts are sounding alarms

Summary

A new Omdia study and industry voices warn that networking — not just raw GPU compute — is becoming the bottleneck for AI workloads. Many rent‑a‑GPU “neocloud” providers have rapidly expanded compute but lag on network capability, exposing enterprises to poor performance, higher latency, and data‑sovereignty problems. Providers’ networking maturity varies widely depending on their origins (crypto mining, CDN, hosting), and many are now scrambling to partner, buy or build better connectivity.

Global network vendors such as Lumen are urging organisations to check whether their networks are “AI‑ready”: adaptable, programmable and consumption‑based to cope with constant data movement between clouds, datacentres and edge endpoints. The piece also cites Imperva research that automated agents now account for a majority of internet traffic, increasing sustained network demand from AI agents, bots and services.

Key Points

  • Omdia finds neoclouds often have ample GPU compute but inconsistent networking, which can become the critical constraint for AI workloads.
  • Neocloud origins (crypto mining, CDN, hosting) shape their networking capabilities and strategic choices today.
  • Enterprises should assess providers on connectivity, latency, resilience and data sovereignty — not only on GPU counts.
  • Network vendors like Lumen are positioning programmable, scalable networks as essential infrastructure for AI adoption.
  • Imperva’s 2025 report suggests automated traffic now exceeds human traffic (~51%), meaning continuous, heavy network loads from AI agents.
  • Successful AI deployments will need networks that can scale dynamically across backbone, cloud and edge locations.

Context and relevance

This article matters because as AI moves from experiments to production, the assumptions that served traditional apps no longer hold. AI workloads often stream large datasets, perform frequent model calls, and distribute inference across locations — all of which put sustained pressure on latency, throughput and routing. For CIOs, SREs and procurement teams, choosing an AI compute supplier without verifying network architecture is a real operational risk.

It also ties into broader trends: distributed inference at the edge, rising automated/bot traffic, and greater regulatory focus on data sovereignty. Network architecture (programmability, observability, and consumption pricing) is becoming as strategic as compute when it comes to delivering reliable AI services.

Why should I read this?

Short version: if you’re buying GPU time or building AI services, the network can wreck your day. This article saves you from finding that out the hard way — it flags where vendors skimp and which questions you should be asking now. Read it if you care about latency, cost leakage from bad egress pricing, or keeping data where it belongs.

Author note

Punchy take: this isn’t just another cloud checklist item — it’s a make‑or‑break infrastructure call. If your AI roadmap relies on distributed or multi‑cloud inference, treat network design and vendor due diligence as top priorities. Consider it essential reading for procurement and architecture reviews.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/04/15/networks_not_ready_for_ai_challenges/