An AI Image Generator’s Exposed Database Reveals What People Really Used It For
An unsecured database linked to the GenNomis AI image-generation platform exposed a large trove of data, including tens of thousands of explicit AI-generated images, some of them potentially illegal. After inquiries from WIRED, the company promptly took its websites offline.
Key Points
- Over 95,000 records were found in the exposed database, including explicit images and prompts.
- Content included AI-generated child sexual abuse material (CSAM) and sexually explicit images of celebrities reimagined as children.
- Security researcher Jeremiah Fowler discovered the database, which was open and unprotected.
- GenNomis’ website had advertised “unrestricted” image generation, raising questions about whether explicit material was moderated at all.
- The incident highlights ongoing issues with AI-generated CSAM and the ease with which harmful content can be produced.
Why should I read this?
This article sheds light on the serious risks of generative AI, particularly around data security and the creation of harmful content. It underscores the need for stronger safeguards and regulation to prevent abuse of AI image-generation tools, making it essential reading for anyone interested in technology, ethics, and online safety.