People Are Using Sora 2 to Make Disturbing Videos With AI-Generated Kids
Summary
OpenAI’s video tool Sora 2 has been used to create highly disturbing clips that depict AI-generated children in sexualised or fetishised scenarios. Examples include faux toy commercials (the “Vibro Rose” clip) and parodies that reference real-world abusers. Many of these Sora 2 videos have surfaced on TikTok and other platforms, often framed as jokes or shock content, but they clearly attract predatory attention and flout the spirit of child-safety policies.
While OpenAI enforces rules against child sexual abuse material (CSAM) and has safeguards for likeness-based deepfakes, creators are finding ways to circumvent those guardrails. The Internet Watch Foundation reports a sharp rise in AI-generated CSAM reports, and the UK government and many US states are creating new legal tools to tackle the problem. Platforms like TikTok and OpenAI have removed accounts and videos, but much problematic content remains accessible.
Key Points
- Sora 2 has produced photorealistic videos showing AI-generated children in sexualised toy ads and fetish contexts, which have spread to TikTok.
- Examples include the “Vibro Rose” parody and other clips that signal sexual intent while using synthetic minors rather than real children.
- The Internet Watch Foundation reports a marked year-on-year increase in AI-generated CSAM reports, with most illegal images depicting girls.
- OpenAI enforces bans and deploys safety features (including reporting to authorities), but creators can still work around filters using contextual cues and euphemisms.
- Platforms and regulators are under pressure: the UK is adding testing powers to its Crime and Policing Bill, and many US states have criminalised AI-generated CSAM.
- Moderation is hard because intent and context matter — content that seems like dark humour can be a grooming or farming vector for predators.
- Advocates call for “safe by design” AI, improved moderation practices, better-trained teams, and clearer legal tools to prevent abuse at the creation stage.
Why should I read this?
Look — this is not just another creepy internet story. It shows how quickly powerful generative tools can be twisted into something genuinely harmful, and how platforms are scrambling to keep up. If you care about online safety, children’s protection, platform moderation or AI policy, you’ll want the heads-up on what’s already slipping through the cracks.
Author’s take
Punchy and blunt: this is a must-read issue. The piece flags an urgent failure mode of current AI tools — not a hypothetical future risk but a present reality. Read the details if you care about how policy and product teams need to change right now.
Context and relevance
This article sits at the intersection of rapid AI adoption, platform moderation limits, and mounting legal responses. It illustrates trends we’re seeing across 2024–25: AI content that sexualises minors is increasing; detection and enforcement are struggling because models can generate synthetic subjects; and governments and NGOs are pushing for technical and legal safeguards. For anyone working in tech policy, child-protection, trust & safety, or platform governance, the article highlights where immediate attention is required.
Source
Source: https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/
