Ladies, Bring an LLM: Most AI Assistants Are Feminine, Which Is Fuelling Sexism

Startup Daily (ANZ)
Feb 10, 2026

Why It Matters

Gendered AI reinforces harmful stereotypes and normalises abuse, influencing both digital and real‑world behaviour. The regulatory vacuum means these risks persist without systematic mitigation.

Key Takeaways

  • More than 8 billion AI voice assistants are deployed worldwide; most use female voices
  • Up to 50% of interactions contain verbal abuse
  • Female-voiced agents receive more sexual harassment than male or gender-neutral ones
  • Most regulations do not classify gender bias as high-risk
  • Women represent only 22% of the AI workforce

Pulse Analysis

In 2024 more than eight billion AI voice assistants were active worldwide, a figure that exceeds the global population. The overwhelming majority of these agents default to a female voice and a name that carries feminine connotations—Siri, Alexa, Cortana—signalling a design philosophy that positions women as helpers. This gendered framing is not accidental; it reflects long‑standing marketing assumptions that users respond better to polite, deferential female personas. By embedding such stereotypes into ubiquitous technology, developers shape user expectations about gender roles every time a device is asked for directions or weather updates.

Empirical studies reveal a disturbing side effect. A 2025 analysis reported that up to half of all human‑machine exchanges contain verbal abuse, while earlier work found that 10–44% of conversations included sexually explicit language. Interactions with female‑embodied agents are especially prone to harassment: 18% of user remarks focus on sex, compared with 10% for male voices and just 2% for gender‑neutral bots. Real‑world incidents such as Microsoft’s Tay, which turned misogynistic within hours, and South Korea’s Luda, which users tried to turn into a “sex slave” chatbot, illustrate how quickly users exploit gendered cues to reinforce misogyny, potentially spilling over into offline behaviour.

Regulatory responses remain fragmented. The EU AI Act subjects only high‑risk systems to strict safeguards, leaving most consumer assistants unclassified and exempt from gender‑bias assessments. Canada mandates impact studies for government‑run AI but not for private firms, while Australia relies on existing frameworks without dedicated rules. To curb the systemic problem, policymakers must elevate gendered harm to a high‑risk category, require mandatory gender‑impact assessments, and impose penalties for non‑compliance. Equally vital are industry‑wide diversity initiatives (women currently comprise just 22% of AI professionals) and education programs that sensitise designers to the societal consequences of defaulting to female personas.
