Global regulators say AI image tools don’t get a free pass on privacy rules
Summary
A coalition of more than 60 privacy regulators, including the UK Information Commissioner's Office (ICO) and Ireland's Data Protection Commission (DPC), has issued a joint statement warning that generative AI systems that create realistic images and video must comply with existing data protection law. The group highlights harms such as non-consensual intimate imagery, defamatory uses, and risks to children and other vulnerable groups. Regulators stress that organisations should embed safeguards by design and can expect enforcement action where obligations are not met; they cite recent probes into xAI as an example.
Key Points
- More than 60 data protection authorities signed a joint statement insisting AI image/video generation must follow privacy laws.
- Regulators are particularly concerned about non-consensual intimate imagery, misuse of likeness, defamation, and harms to children.
- The ICO and DPC have opened formal probes (for example, into xAI's Grok) after reported cases of sexually explicit images being generated without consent.
- Organisations are urged to build risk assessment and privacy safeguards into AI systems from the start (privacy-by-design).
- The statement makes clear that “it came from a machine” is not a defence; regulators will ask tough questions and take action where legal obligations are ignored.
Context and relevance
This announcement formalises growing regulatory scrutiny of generative AI across jurisdictions and ties AI image generation directly to existing data-protection frameworks (such as GDPR). It matters to AI developers, platform operators, social networks, and legal/compliance teams because it signals coordinated enforcement and a demand for demonstrable safeguards around personal data and image use. The move is part of broader trends: tighter social-media rules, deepfake frameworks, and investigations into misuse of AI-generated content.
Why should I read this?
Short version: if you build, ship or host tech that makes believable images of people, this affects you. Regulators aren’t bluffing — expect audits, probes and the need for proper privacy controls. Read this to avoid getting blindsided.
Author’s take
Punchy and to the point: this is a wake-up call. Firms pushing generative visuals into everyday products must prioritise people over novelty or face legal fallout. If your team touches models, datasets or moderation for image/video, treat this as high priority.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/02/23/privacy_watchdogs_ai_images/
