Why AI writing is so generic, boring, and dangerous: Semantic ablation
Summary
Claudio Nastruzzi coins and explains “semantic ablation”: the systematic erosion of high-entropy, high-precision content when LLMs refine text. It isn’t a simple bug but a structural outcome of greedy decoding, RLHF and aggressive “safety/helpfulness” tuning. In practice, models nudge writing toward the statistical mean, replacing rare, precise tokens and unconventional metaphors with safe, low-perplexity substitutes. The result looks polished but loses specificity, nuance and intellectual density — a “JPEG of thought.”
Key Points
- Semantic ablation is the subtractive bias where LLMs remove rare, high-information tokens during refinement rather than inventing false facts.
- Causes include greedy decoding, reinforcement learning from human feedback (RLHF), and safety/helpfulness penalties that favour generic output.
- It can be measured as entropy decay: successive refinement loops collapse vocabulary diversity, visible as a shrinking type-token ratio and lower token-level entropy.
- Nastruzzi describes three stages: metaphoric cleansing (loss of vivid imagery), lexical flattening (replacement of specialist terms with generic synonyms), and structural collapse (forcing complex reasoning into predictable templates).
- Unlike “hallucination” (adding false content), semantic ablation destroys existing signal — the subtle, high-entropy pieces that carry meaning and originality.
- The cultural and practical risk: normalising ablated output damages expertise transmission, flattens discourse, and builds decision-making on hollowed language.
- Naming the problem matters: recognising semantic ablation is a first step toward measurement and mitigation, especially for content creators, researchers and policymakers.
Context and relevance
This piece is important for anyone who produces, edits or relies on written content enhanced by AI. It reframes a quality problem — not just hallucinations but systematic loss of nuance — as an algorithmic bias. That matters for journalism, academic writing, technical documentation and legal or medical communication where precision and domain-specific vocabulary are essential. The article connects trends in model tuning and safety controls to downstream effects on knowledge quality and cultural expression.
Why should I read this?
Because if you use AI to “polish” copy, you’re probably letting it sand away the bits that make your work sharp. This isn’t just aesthetic: it’s about losing meaning, expertise and subtlety. Read it to spot when your drafts are being dumbed down and to argue for better metrics and controls. Short version: don’t trust “nice-sounding” AI polish to keep your edge.
Author style
Punchy — the author names a precise failure mode and makes the stakes clear: accepting ablated output accelerates a race to the middle that erodes substantive thought. If you care about quality or domain accuracy, this is a significant call to action.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
