So Long, GPT-5. Hello, Qwen

Summary

WIRED’s Will Knight reports that Qwen — Alibaba’s open-weight large language model — has emerged as a major player as the AI landscape shifts away from closed, benchmark-chasing behemoths like GPT-5. Despite not being the top-scoring model on standard benchmarks, Qwen’s openness, ease of modification and wide adoption (from academic papers at NeurIPS to integrations in products such as Rokid smart glasses and BYD dashboards) have driven rapid uptake. In contrast, some US models (including GPT-5 and Meta’s Llama 4) disappointed on performance or availability, pushing developers to favour Chinese open models that are simple to tinker with and widely documented.

Key Points

  • Qwen (通义千问) is an open-weight model from Alibaba that’s become popular because it’s easy to download, modify and deploy.
  • Developers and researchers used Qwen extensively at NeurIPS and across papers and projects — signaling strong academic engagement.
  • Practical integrations include Rokid smart glasses (real‑time translation/transcription) and BYD car dashboards; firms such as Airbnb, Perplexity and Nvidia have also adopted the model.
  • Chinese open models overtook US open models in Hugging Face downloads in mid‑2025, reflecting a shift in developer preference.
  • GPT-5 and other US models underwhelmed some users and critics; a move toward closed development and narrow benchmark optimisation has alienated parts of the community.
  • Qwen’s popularity underlines that a model’s real value is measured by how widely it’s used to build other things, not just benchmark scores.

Why should I read this?

Short version: if you care which models people will actually build on in 2026, read this. It’s a quick read that explains why openness and tweakability are winning over raw benchmark bragging — and why GPT-5’s shine might be fading in the face of practical, hackable alternatives. We’ve saved you sifting through forum threads and paper appendices.

Context and Relevance

This piece matters because it highlights a likely industry pivot: from closed, centrally hosted frontier models to widely accessible open-weight ones that lower the barrier for product teams and researchers. That affects developers (easier prototyping and on-device deployment), enterprises (faster integration and customisation), academics (reproducibility and citation), and the geopolitics of AI (Chinese firms sharing engineering details more openly than some US counterparts). If you follow AI tooling, investment, or regulation, the trends here are directly relevant.

Author

Will Knight — Senior Writer covering artificial intelligence (AI Lab newsletter). Punchy, informed and tracking the real-world impacts of AI engineering choices.

Source

Source: https://www.wired.com/story/expired-tired-wired-gpt-5/