To Make AI Safe, Put Women and Girls at the Heart of the Technology

South China Morning Post — Economy, Mar 12, 2026

Why It Matters

Uncontrolled AI‑generated abuse endangers the safety and mental health of women and girls, eroding trust in digital platforms. Embedding gender‑focused safeguards at the design stage is critical for building inclusive, responsible AI ecosystems.

Key Takeaways

  • About 90% of non‑consensual deepfake pornography depicts women.
  • Nudification apps have been downloaded more than 705 million times, fuelling an AI abuse market.
  • Hong Kong co‑signed deepfake misuse statement with 60 organisations.
  • EU probes X's Grok tool for sexualized image generation.
  • Greater representation of women in AI roles can help reduce technology‑facilitated violence.

Pulse Analysis

The proliferation of AI‑generated deepfakes has transformed digital harassment into a scalable, profit‑driven industry. Recent studies reveal that nearly nine out of ten non‑consensual pornographic deepfakes feature women, and apps offering "nudification" capabilities have been downloaded more than 705 million times. This surge not only fuels a lucrative underground market but also inflicts severe psychological trauma on victims, as seen in South Korea’s school‑wide deepfake scandal. The sheer volume underscores the urgent need for robust safeguards beyond traditional content moderation.

Regulators worldwide are beginning to respond. The European Commission has launched an investigation into Elon Musk’s X platform over allegations that its Grok AI tool was used to create sexualized images of real individuals, while South Korea is tightening laws against non‑consensual intimate imagery. However, legislation often lags behind rapid technological advances, leaving gaps that platforms can exploit. Proactive design—embedding safety features from inception—and transparent reporting mechanisms are essential to close these loopholes and reduce the retraumatization of survivors navigating opaque complaint processes.

Hong Kong is uniquely positioned to model a gender‑inclusive AI strategy. Its newly announced AI strategy committee, funded in the 2026‑27 budget, can prioritize education, diversify the tech workforce, and embed safety standards that reflect women’s lived experiences. By championing greater female participation in AI development and decision‑making, the city can help reshape algorithms to recognize and mitigate bias. Such a holistic approach not only protects vulnerable users but also sets a global benchmark for responsible AI that balances innovation with equity and safety.
