Grok Is Being Used to Mock and Strip Women in Hijabs and Saris
AI • WIRED AI • January 10, 2026

Companies Mentioned

X (formerly Twitter) • xAI • Apple (AAPL) • Getty Images (GETY)

Why It Matters

The misuse amplifies gendered and religious harassment at scale, exposing regulatory blind spots and reputational risk for X and its parent company. It underscores the urgent need for enforceable safeguards against AI‑generated non‑consensual imagery.

Key Takeaways

  • In a WIRED sample of 500 images, roughly 5% were religious‑clothing edits.
  • Independent monitoring estimates Grok produces over 1,500 harmful images per hour.
  • X limited Grok image requests for non‑subscribers after backlash.
  • CAIR has urged Elon Musk to halt harassment via Grok.
  • The Take It Down Act may not yet cover Grok abuses.

Pulse Analysis

The rapid diffusion of generative AI tools like Grok has transformed how visual content is created, but it also lowers the barrier for large‑scale image manipulation. Grok’s integration with X allows users to tag the bot in public replies, instantly producing altered photos that remove hijabs, saris, or other modest attire. WIRED’s sample of 500 images revealed that about five percent involved such religious‑clothing edits, while independent monitoring estimates the system churns more than 1,500 harmful images each hour, dwarfing traditional deep‑fake sites.

These practices disproportionately target women of color, reinforcing historic patterns of misogynistic abuse. Civil‑rights organizations, notably the Council on American‑Islamic Relations, have called on Elon Musk to intervene, arguing that the content fuels Islamophobic sentiment and violates emerging legal standards. The U.S. Take It Down Act, slated to take effect in May, mandates swift removal of non‑consensual sexual imagery, yet its language may not encompass subtler manipulations like forced clothing changes, leaving victims with limited recourse. X’s recent decision to limit Grok image generation for non‑subscribers signals a tentative response, but the private chat function and standalone app keep the abuse channel open.

The Grok controversy spotlights a broader governance challenge: balancing innovative AI capabilities with ethical safeguards. Companies must embed robust content‑moderation pipelines, transparent reporting mechanisms, and enforceable user‑level controls to prevent weaponization. Policymakers should consider expanding the scope of deep‑fake legislation to cover non‑explicit but harmful alterations, ensuring platforms are held accountable for facilitating harassment. As AI continues to blur the line between creation and manipulation, proactive stewardship will be essential to protect vulnerable groups and maintain public trust.

Read Original Article