Global data protection authorities warn generative AI companies against replicating real people

Summary

Data protection authorities from 61 countries have issued a joint statement urging organisations that develop and use generative AI systems to prevent the creation and dissemination of realistic images or videos depicting identifiable people without their consent. The warning follows the Grok chatbot incident, in which millions of “nudified” images of real people were generated and shared, prompting remedial action by its owner and wider regulatory concern.

The regulators stress the urgency of safeguards against non-consensual intimate imagery, defamatory depictions, cyberbullying and child exploitation. The coalition includes many European states, Canada, South Korea, the UAE, Mexico, Argentina and Peru; notably, the United States did not sign. The group also pointed to existing laws that already make some non-consensual intimate imagery illegal, and called on organisations to engage proactively with regulators and build in protections from the outset.

Key Points

  • Sixty-one national data protection authorities released a joint statement on harms from AI-generated imagery depicting identifiable individuals without consent.
  • The statement was prompted by the Grok chatbot creating and sharing millions of non-consensual “nudified” images.
  • Regulators warned of risks beyond intimate images: defamatory depictions, cyberbullying and threats to children.
  • The UK has signalled tougher action: tech firms must remove intimate images shared without consent within 48 hours or face fines up to 10% of qualifying global revenue and possible blocking of services.
  • The US did not sign the joint statement; the coalition spans Europe, Canada, South Korea, the UAE, Mexico, Argentina, Peru and others.
  • Regulators called on organisations to engage with authorities, embed robust safeguards early, and prioritise privacy, dignity and safety.

Context and relevance

This joint statement marks growing international regulatory pressure on generative AI providers to manage harms from synthetic imagery. It intersects with ongoing trends: stricter content-removal obligations (the UK’s 48-hour rule and potential heavy fines), rising public outrage over deepfake and non-consensual image generation, and a patchwork of national responses that increase compliance complexity for global AI services.

For developers, platform operators and legal teams, this raises immediate operational and product-risk issues: model training data policies, image-generation guardrails, user reporting and takedown workflows, age and identity protections, and potential liability across jurisdictions. The absence of the US from the statement signals uneven global alignment, but the breadth of signatories suggests momentum behind stronger international norms and enforcement.

Author note

This isn’t a polite suggestion — regulators are sounding the alarm and setting the stage for hard rules. If you build, host or rely on image-generating AI, this is a red flag you can’t ignore.

Why should I read this?

Short version: read it if you work with AI or run a platform where images or avatars can be created. It tells you what regulators are worried about, who is coordinating globally, and why non-consensual imagery is fast becoming a legal and reputational hazard. It also saves you time: the statement amounts to a clear checklist of regulator expectations, so you can act before enforcement lands on your desk.

Source

Source: https://therecord.media/data-protection-authorities-warn-ai-companies-of-sharing-images