The votes are in: AI will hurt elections and relationships

Summary

Stanford HAI’s 2026 AI Index Report — a 423-page survey of the current AI landscape — finds rapid mass adoption, rising real‑world harms, and a persistent gap between capability and responsible use. AI reached roughly 53% of the population within three years, while documented harmful incidents rose to 362 in 2025 (up from 233 in 2024). Both experts and the public largely agree: the clearest negative impacts will be on elections and personal relationships.

The report highlights big capability gains (models near‑perfect on some coding benchmarks) alongside worrying weaknesses: wildly variable hallucination rates across models (22–94%), poor performance on mundane tasks (e.g. GPT‑5.4 reads analog clocks correctly ~50.6% of the time), and robots managing only around 12% of household tasks in simulation. It also notes China is closing the performance gap with US models, even as US investment dwarfs China’s and as the US loses incoming AI technical talent.

Key Points

  • AI adoption climbed rapidly — estimated 53% of the population in three years; 88% of organisations report using AI.
  • Documented harmful AI incidents rose to 362 in 2025, up from 233 in 2024, suggesting harms are scaling alongside adoption.
  • Experts and the US public agree AI is likely to harm elections and personal relationships.
  • Responsible AI practices and safety benchmarks are not keeping pace with capability and deployment.
  • Benchmarks show mixed results: coding ability surged, but hallucination rates vary widely (22–94%), and simple real‑world tasks remain challenging for models and robots.
  • Public trust in government regulation is low in the US (31% trust), amid industry lobbying and contested policy moves.
  • China is closing the performance gap on leading benchmarks, while the US still leads in investment ($285.9bn in 2025) but is seeing a steep fall in incoming AI researchers and developers.

Context and relevance

This report is a broad snapshot useful to policymakers, security teams, campaign strategists, tech leaders and anyone concerned about AI's societal effects. It ties together adoption statistics, incident data and benchmark results to show that capability growth is outpacing governance and safe practice, a central concern for regulation, election integrity, misinformation mitigation and social cohesion. The geopolitical note, that Chinese models are rapidly catching up, matters for competition, standards and supply chains.

Why should I read this

Quick version: it’s the big, nerdy state‑of‑AI roundup you won’t want to miss. If you care about elections, public trust, workplace futures or who wins the AI arms race, this cuts through the noise. We’ve skimmed the heavy bits so you don’t have to — but the stats and takeaways are worth a proper read.

Author style

Punchy: the report isn’t just another academic paper — it’s data-heavy and alarming where it needs to be. Given the clear signals about elections and relationships, the findings deserve attention from decision‑makers and risk managers right now.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/04/14/ai_report_2026_stanford_hai/