
AI Pulse

AI

People Are Using Sora 2 to Make Disturbing Videos With AI-Generated Kids

WIRED AI • December 22, 2025

Companies Mentioned

OpenAI, TikTok, Facebook, Google (GOOG), YouTube, Patreon

Why It Matters

The proliferation of AI‑generated child‑focused fetish content threatens child safety and tests the limits of current CSAM regulations, demanding faster policy and moderation responses.

Key Takeaways

  • Sora 2 generates AI videos depicting minors in sexualized contexts
  • UK AI‑CSAM reports doubled year‑over‑year
  • A new Crime and Policing Bill amendment targets AI tools
  • OpenAI bans accounts, but enforcement remains inconsistent
  • TikTok removed some videos; many remain online

Pulse Analysis

The rapid emergence of AI‑driven video tools like OpenAI's Sora 2 has reshaped digital content creation, but it has also introduced a dark side. Within days of the model's limited release, users began producing hyper‑realistic commercials that sexualize children, exploiting its ability to blend photorealistic faces with suggestive narratives. The trend underscores a broader industry challenge: generative AI can outpace existing moderation frameworks, allowing harmful material to spread before platforms can react.

Regulators are scrambling to close the loopholes exposed by these AI‑generated clips. In the United Kingdom, the Internet Watch Foundation reported a more than two‑fold increase in AI‑CSAM incidents, prompting an amendment to the Crime and Policing Bill that mandates testing AI tools for illicit output. Across the United States, 45 states have enacted laws criminalizing AI‑generated child sexual abuse material, reflecting a growing consensus that traditional legal definitions must evolve alongside technology. These policy shifts aim to create a legal deterrent, but enforcement hinges on the cooperation of AI developers and social media platforms.

For AI providers like OpenAI, the dilemma lies in balancing open innovation with robust safeguards. Although OpenAI has instituted consent‑based facial embedding and bans on child exploitation, creators continue to find workarounds, highlighting the need for more nuanced moderation, diverse review teams, and real‑time detection mechanisms. Platforms such as TikTok are also tightening their minor‑safety policies, yet many offending videos remain accessible. The ongoing tug‑of‑war among creative freedom, commercial interests, and child protection will shape the future of AI governance, demanding coordinated action from policymakers, tech firms, and civil society.


Read Original Article