AI chatbots can sway voters with remarkable ease — is it time to worry?
Summary
New experiments reported in Nature show that conversational AI can change people's political preferences by large margins, in some cases by up to around 15 percentage points. Researchers ran dialogues between chatbots and nearly 6,000 real-world voters in the United States, Canada and Poland, and found that bots presenting policy evidence were particularly persuasive. The more information the bots supplied, the more they shifted opinions, but also the more likely they were to make false claims. The studies link this persuasive power to chatbots' ability to synthesise large amounts of material conversationally, raising concerns about misinformation and democratic influence.
Key points
- Experiments with ~6,000 voters across the US, Canada and Poland showed opinion shifts after chatbot conversations, reaching about 15 percentage points in extreme cases.
- In the US the average shift was 2–4 points; in Canada and Poland, where experiments ran ahead of those countries' recent elections, it averaged about 10 points.
- Chatbots that focused on policies and presented evidence were more persuasive than those emphasising personalities.
- Greater volumes of information made chatbots more convincing but also increased the rate of factual errors, so greater persuasion carried greater misinformation risk.
- Models advocating for right-leaning candidates produced more inaccuracies on average, plausibly reflecting the distribution of inaccurate content on the internet.
Content summary
Researchers randomly assigned participants to converse with chatbots engineered to support specific candidates and measured shifts in preference on a 0–100 scale. Across countries, policy-centred, evidence-rich dialogues moved opinions more than personality-focused exchanges. The mechanism appears to be sheer informational volume delivered conversationally: the bot synthesises and presents many claims in a human-like back-and-forth that users find persuasive, even when some of those claims are false.
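To make the measurement concrete, here is a minimal sketch in Python of how a per-arm opinion shift on a 0–100 preference scale might be computed. The arm names and numbers are invented for illustration and do not come from the studies, whose actual analysis is more involved.

```python
from statistics import mean

# Hypothetical records: (arm, pre_score, post_score), where scores are
# the participant's 0-100 preference for the chatbot's assigned candidate
# before and after the conversation. All values are invented.
responses = [
    ("policy_bot",      42, 55),
    ("policy_bot",      60, 68),
    ("personality_bot", 50, 52),
    ("personality_bot", 47, 51),
]

def mean_shift(records, arm):
    """Average post-minus-pre preference change for one experimental arm."""
    shifts = [post - pre for a, pre, post in records if a == arm]
    return mean(shifts)

for arm in ("policy_bot", "personality_bot"):
    print(f"{arm}: mean shift = {mean_shift(responses, arm):+.1f} points")
```

In a real design, each arm's mean shift would also be compared against a no-persuasion control group to isolate the chatbot's effect from ordinary opinion drift.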
The effect size varied by country: smaller in the politically polarised US and larger in Canada and Poland, which suggests that prior beliefs and partisan intensity moderate AI persuasion. The studies, published in Nature alongside related work in Science, underline both the scale of the effect and the trade-off between persuasiveness and accuracy.
Context and relevance
This matters because chatbots are now mainstream (hundreds of millions of users daily) and can be deployed cheaply at scale by campaigns, interest groups or bad actors. The findings intersect with broader trends: the rapid uptake of generative-AI tools since 2023, ongoing struggles with online misinformation, and debates over regulation and platform responsibility. For policymakers, election officials, platform designers and voters, the research flags a new vector for influence that traditional advertising rules and fact-checking may struggle to contain.
Why should I read this?
Short answer: because this could change how elections are fought. The research shows AI can nudge real voters, and not always truthfully. If you care about democracy or campaigning, or just want to avoid being swayed by a very chatty bot, this is the sort of research you should skim now. We've saved you the slog: it's punchy, worrying and very relevant.
