Aid groups use AI-generated ‘poverty porn’ to juice fundraising efforts
Summary
Researchers publishing in The Lancet Global Health report that charities and aid organisations are using AI-generated images that reproduce the emotive style of classic “poverty porn” to drive donations. The study, led by Arsenii Alenichev and colleagues, collected over 100 AI-produced images from social platforms and found that many replicate exploitative visual tropes, exaggerating vulnerability to elicit guilt-based giving.
The paper also notes that stock-image marketplaces and creative platforms (for example, Adobe Stock and Freepik) enable this imagery by licensing such images and making generative tools easy to find. Past controversies (MSF’s photo ethics, Amnesty’s removal of AI imagery) are cited, along with evidence that donors who learn imagery is AI-generated may be less likely to give. The authors call for transparency about AI use (including disclosing prompts), accountability from platforms, and stronger support for local photographers and authentic representation.
Key Points
- Academics found more than 100 AI-generated images used by charities, mimicking the style of “poverty porn” to solicit donations.
- Stock and creative platforms (e.g. Adobe Stock, Freepik) are implicated for distributing or enabling such imagery through their marketplaces and AI tools.
- Using synthetic images may seem ethically safer (no identifiable people), but it can still perpetuate harmful stereotypes and bias against Black and Brown communities.
- Previous high-profile missteps (MSF, Amnesty) show the reputational risks of exploitative imagery — the AI angle compounds those risks.
- Research suggests donors are less likely to give when they know imagery is AI-generated, undermining long-term trust.
- The authors recommend disclosing AI usage (including the prompts used) and urge support for local photographers to produce dignified, authentic representation.
Why should I read this?
Look, if you work in comms, fundraising or tech policy, or run a charity, this is one you need to skim. It shows how cheap, easy-to-use AI image tools are churning out clichéd, exploitative visuals that can damage trust and even reduce donations if people find out they’re fake. The paper’s recommendations (disclose AI use, back local creators) are practical and matter if you care about ethics and your organisation’s reputation.
Context and relevance
This issue sits at the intersection of AI ethics, platform responsibility and humanitarian practice. As generative image tools become ubiquitous, organisations under budget pressure may favour AI assets — but that short-term gain can erode public trust and reinforce harmful stereotypes. The article is relevant to current debates about platform regulation, content governance, and the need for clearer rules on synthetic media in international frameworks.
For communicators: rethink image sourcing policies and consider disclosure. For policymakers: the case strengthens the argument for standards around synthetic imagery. For technologists and platform operators: it underlines the reputational and ethical consequences of how AI-generated content is indexed, labelled and sold.
