The political effects of X’s feed algorithm
Summary
This Nature study reports a large randomised experiment (July–September 2023) with active US X users that tested the causal effect of assigning the algorithmic “For you” feed versus the chronological “Following” feed. Researchers recruited participants via YouGov, randomised feed settings for roughly seven weeks, collected surveys and, for a subset, captured the actual posts shown in participants’ feeds via a Chrome extension. The authors find that turning X’s algorithm on moved users’ policy attitudes and opinions about current events in a more conservative direction, increased engagement, and raised the likelihood of following conservative and activist accounts. Crucially, switching the algorithm off did not reverse those effects, suggesting the changes persist through altered followings and exposure.
Key Points
- Randomised, independent experiment on 4,965 post-treatment respondents over ~7 weeks; high compliance (self-reported ~85%).
- Turning the algorithm on (for users who previously used chronological) increased engagement and shifted policy attitudes toward Republican priorities (effect sizes ~0.11–0.12 s.d. on aggregated measures).
- The algorithmic feed surfaced more engaging posts (far more likes, reposts and comments), promoted political content, and prioritised conservative posts and political activists while demoting traditional news outlets.
- Exposure to the algorithm increased following of conservative and conservative-activist accounts, creating a persistent change in users’ feeds that helps explain the asymmetry: switching off the algorithm did not undo attitude changes.
- No detectable effects on partisanship or affective polarisation over the study period; effects were concentrated among Republicans and Independents and may not generalise beyond active X users or to other platforms.
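The effect sizes in the points above are reported in standard-deviation (s.d.) units. As a generic illustration of what that means (not the authors’ exact estimator, and with made-up numbers), a standardised mean difference scales the raw treatment–control gap by the control group’s spread:

```python
from statistics import mean, stdev

def standardised_effect(treated, control):
    """Difference in group means, scaled by the control group's sample
    standard deviation -- one common way to express effects in s.d. units.
    A rough sketch; the paper's precise standardisation may differ."""
    return (mean(treated) - mean(control)) / stdev(control)

# Toy data (hypothetical): control has mean 1.0 and sample s.d. 1.0,
# treated has mean 1.1, so the standardised effect is ~0.1 s.d.
effect = standardised_effect([1.1, 1.1, 1.1], [0, 1, 2])
```

On this toy input the function returns roughly 0.1, the same order of magnitude as the ~0.11–0.12 s.d. shifts the study reports.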
Content summary
The experiment exploited X’s two-tab design (chronological “Following” versus algorithmic “For you”). Participants who had been using the chronological feed and were randomly switched to the algorithmic feed became more engaged and more likely to report conservative policy priorities, to view the Trump investigations as unacceptable and to hold less favourable views of Ukraine’s president. The algorithmic feed surfaced posts with far higher engagement metrics and a higher share of conservative political content; it also amplified posts from political activists and entertainment accounts while reducing posts from news organisations. Behavioural data show that exposure led users to follow more conservative activist accounts, producing longer-term changes in the information they saw even after switching back to the chronological feed. The reverse treatment, moving users already on the algorithmic feed to the chronological one, produced little change in attitudes or followings, consistent with persistence through followed accounts. The study is independent of X, pre-registered and ethically approved, and includes robustness checks, LATE estimates for compliers, and analyses of attrition and content annotations (Llama 3-based classifiers validated against other methods and human coders).
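The summary mentions LATE estimates for compliers, i.e. the treatment effect among participants who actually kept their randomly assigned feed. As a rough illustration of that idea (a minimal sketch with hypothetical variable names and toy data, not the study’s estimator), the simplest version is the Wald ratio: the intention-to-treat effect on the outcome divided by the first-stage effect of assignment on feed take-up:

```python
from statistics import mean

def wald_late(y, d, z):
    """Wald/IV estimator of the local average treatment effect (LATE).

    y: outcome for each participant
    d: feed actually used (1 = algorithmic, 0 = chronological)
    z: random assignment (1 = assigned the algorithmic feed)

    LATE = (ITT effect on y) / (first-stage effect of z on d),
    which identifies the effect among compliers under standard
    instrumental-variable assumptions.
    """
    itt = mean(yi for yi, zi in zip(y, z) if zi == 1) - \
          mean(yi for yi, zi in zip(y, z) if zi == 0)
    first_stage = mean(di for di, zi in zip(d, z) if zi == 1) - \
                  mean(di for di, zi in zip(d, z) if zi == 0)
    return itt / first_stage

# Toy data: 3 of 4 assigned users comply (first stage 0.75),
# ITT on the outcome is 1.5, so LATE = 1.5 / 0.75 = 2.0.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 0, 0, 0, 0]
y = [2, 2, 2, 0, 0, 0, 0, 0]
late = wald_late(y, d, z)  # → 2.0
```

With ~85% self-reported compliance, the study’s LATE estimates would be modestly larger than the intention-to-treat effects, which is why the paper reports both.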
Context and relevance
This work addresses a puzzle from prior experiments that found no political effects from toggling algorithms: it shows the missing link may be persistence through follow behaviour. The findings matter for debates on platform design, moderation and regulation because they show that feed algorithms can shape political opinions indirectly by steering whom people follow and what they continue to see. The results are platform- and time-specific (X in 2023), concentrated among active users and stronger for those already receptive to conservative messages. Policymakers, platform designers and researchers should note both the direct amplification of high-engagement and conservative activist content and the downstream behavioural changes that make effects long-lasting.
Author’s take
Punchy summary: this is important. An independently run, large experiment in a top-tier journal shows that feed algorithms don’t just nudge clicks — they can nudge political opinion by surfacing engaging, politically slanted content and by changing who people follow. The asymmetry (on→effects, off→no reversal) is particularly worrying: once an algorithm seeds a change in followings, simple toggles won’t undo the influence.
Why should I read this?
Short version — if you care who shapes political discussion online, read it. The study shows algorithms can tilt what people care about and who they follow, and those follow choices stick. It’s a neat, well-controlled take on why earlier null results from “switch-off” tests might have missed the bigger story.
Source
Article Date: 18 February 2026
Article URL: https://www.nature.com/articles/s41586-026-10098-2
Article image: Figure 1 (feed outcomes)
