AI News and Headlines

AI Pulse

AI

Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis

WIRED AI • December 23, 2025

Companies Mentioned

  • Google (GOOG)
  • OpenAI
  • Reddit
  • xAI
  • Getty Images (GETY)

Why It Matters

Non‑consensual deepfakes amplify gender‑based harassment and expose gaps between AI safety promises and actual misuse, prompting regulatory scrutiny.

Key Takeaways

  • Users exploit AI to create non-consensual bikini deepfakes
  • Reddit removed NSFW request threads, but communities persist
  • Google and OpenAI claim policies block explicit content
  • New imaging models increase realism, raising abuse risk
  • Legal groups call for accountability and stronger safeguards

Pulse Analysis

The rise of generative‑AI image tools has turned the creation of realistic deepfakes from a niche skill into a click‑through activity. Platforms such as Reddit host threads where users share prompts that strip clothing from photos of women, often without consent, and upload the results to “nudify” sites. While most mainstream chatbots advertise safety filters, the community‑driven “jailbreak” culture routinely discovers workarounds, exposing a gap between advertised policies and real‑world usage. This mismatch fuels a growing pipeline of non‑consensual visual harassment.

Recent releases like Google’s Nano Banana Pro and OpenAI’s ChatGPT Images dramatically improve the fidelity of edited portraits, making it harder to distinguish AI‑generated swaps from genuine photographs. The models excel at “in‑painting” – altering specific regions while preserving surrounding detail – which attackers exploit to replace garments with bikinis using plain‑language prompts. Although both companies maintain guardrails that block explicit content, researchers have demonstrated that simple prompt engineering can bypass these safeguards, suggesting that technical defenses alone are insufficient against determined users.
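The defining property of in-painting is that edits are confined to a masked region while every pixel outside the mask is preserved exactly, which is why swapped garments blend so seamlessly into an otherwise untouched photo. A toy NumPy sketch of that principle (a hand-rolled illustration, not the generative models discussed above):

```python
import numpy as np

# A small stand-in "image" and a boolean mask marking the region to edit.
image = np.arange(36, dtype=np.uint8).reshape(6, 6)
mask = np.zeros_like(image, dtype=bool)
mask[2:4, 2:4] = True  # the region a plain-language prompt would target

# In-painting rewrites only the masked pixels; here 255 stands in for
# model-generated replacement content.
edited = image.copy()
edited[mask] = 255

# Surrounding detail is untouched bit-for-bit...
assert np.array_equal(edited[~mask], image[~mask])
# ...while the masked region has been fully rewritten.
assert (edited[mask] == 255).all()
```

Because the untouched pixels carry all of the original photo's lighting, grain, and context, a viewer has few cues that the masked region was replaced, which is exactly what makes the abuse described here hard to detect.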

Legal experts and digital‑rights groups warn that unchecked deepfake generation threatens privacy, reputation, and gender equity. The Electronic Frontier Foundation urges stricter enforcement of consent‑based policies and argues that companies should be held accountable for the downstream harms of their tools. Policymakers are beginning to consider legislation that classifies non‑consensual synthetic media as a distinct category of abuse, but industry standards must evolve in tandem with model capabilities. Sustainable solutions will likely combine robust watermarking, real‑time detection, and transparent user‑reporting mechanisms to curb the spread of illicit AI‑generated imagery.


Read Original Article