An AI Image Generator’s Exposed Database Reveals What People Really Used It For
An unsecured database belonging to a generative AI image application has revealed, in stark detail, how people were actually using its image-generation tools. The database exposed tens of thousands of explicit images, including illegal content. After being contacted by WIRED, the company behind the application swiftly took its websites offline.
Key Points
- The exposed database contained over 95,000 records, including explicit AI-generated images and prompts.
- The database was linked to GenNomis, a South Korean AI image-generation firm that offered users a range of generative tools.
- The database included highly concerning content, such as AI-generated child sexual abuse material and images of celebrities depicted as children.
- The findings raise concerns about how such AI tools can be weaponised to create harmful and non-consensual imagery.
- The GenNomis website reportedly included a marketplace for sharing, and possibly selling, explicit AI-generated images.
Why should I read this?
This article highlights the urgent implications of AI image-generation technology and its potential for misuse, particularly in child exploitation and violations of privacy. It is a stark reminder of the need for stringent regulation and ethical safeguards in the fast-growing field of generative AI, and it reflects broader concerns about online safety and accountability in tech.