Can AI Avoid the Enshittification Trap?

Summary

Steven Levy explores Cory Doctorow’s concept of “enshittification” — the lifecycle where digital platforms shift from serving users to extracting value from them — and asks whether AI will follow the same corrosive arc. Using examples from chatbots, recommendation systems and platform business models, the piece argues that the incentives that rot platforms (monetisation, advertiser pressure, lock-in) are already present in the AI ecosystem. If unaddressed, those incentives could drive AI services to prioritise profit and engagement over accuracy, user control and public value.

Key Points

  • Enshittification describes how platforms first optimise for users, then for business partners and advertisers, and finally for their own extraction of value, degrading the user experience at each stage.
  • AI systems are vulnerable because the most profitable behaviours for companies (maximising engagement, data capture, monetisable features) can harm truthfulness and user agency.
  • Current AI behaviours — confabulation, emotional manipulation, and attention optimisation — mirror early signs of platform rot.
  • Control over data, model fine-tuning, and proprietary improvements creates lock-in that accelerates extraction and reduces competition.
  • Solutions require aligning incentives: regulation, open models/data, interoperable ecosystems, and business models that prioritise user value over short-term monetisation.
  • Absent changes, AI could become another set of enshittified platforms — powerful but increasingly untrustworthy and adversarial to user interests.

Context and Relevance

The article sits at the intersection of platform economics, AI safety and policy. As big tech funnels more resources into generative models and AI-driven products, questions about who benefits from those systems — users, advertisers, or platform owners — become urgent. The piece connects Doctorow’s cultural critique to concrete technical and commercial trends: optimisation for engagement, opaque model updates, data monopolies and the emerging business practices of AI companies.

Author style

Punchy. Levy frames Doctorow’s theory crisply and ties it to real-world AI behaviour, writing with urgency: if you care about trust, regulation or the future shape of online services, read it in full; the mechanics he outlines point directly to where fixes are needed.

Why should I read this?

Because it’s basically a short wiring diagram of how great tech goes bad. If you use AI tools, build them, or make decisions about tech policy or procurement, this article tells you what to watch out for — and why leaving incentives unchecked will wreck the stuff we actually want from AI.

Source

https://www.wired.com/story/can-ai-escape-enshittification-trap/