Gendered AI reinforces harmful stereotypes and normalises abuse, influencing both digital and real-world behaviour. The regulatory vacuum means these risks persist without systematic mitigation.
In 2024, more than eight billion AI voice assistants were active worldwide, a figure that exceeds the global population. The overwhelming majority of these agents default to a female voice and a name with feminine connotations – Siri, Alexa, Cortana – signalling a design philosophy that positions women as helpers. This gendered framing is not accidental; it reflects long-standing marketing assumptions that users respond better to polite, deferential female personas. By embedding such stereotypes into ubiquitous technology, developers shape user expectations about gender roles every time a device is asked for directions or a weather update.
Empirical studies reveal a disturbing side effect. A 2025 analysis reported that up to half of human-machine exchanges contain verbal abuse, while earlier work placed the figure at 10–44%, with many exchanges containing sexually explicit language. Interactions with female-embodied agents are especially prone to harassment: 18% of user remarks focus on sex, compared with 10% for male voices and just 2% for gender-neutral bots. Real-world incidents such as Microsoft's Tay, which turned misogynistic within hours, and Korea's Luda, repurposed as a "sex slave" chatbot, illustrate how quickly users exploit gendered cues to reinforce misogyny, potentially spilling over into offline behaviour.
Regulatory responses remain fragmented. The EU AI Act subjects only high-risk systems to strict safeguards, leaving most consumer assistants unclassified and free from gender-bias assessments. Canada mandates impact studies for government-run AI but not for private firms, while Australia relies on existing frameworks without dedicated rules. To curb the systemic problem, policymakers must elevate gendered harm to a high-risk category, require mandatory gender-impact assessments and impose penalties for non-compliance. Equally vital are industry-wide diversity initiatives – women currently comprise just 22% of AI professionals – and education programs that sensitise designers to the societal consequences of defaulting to female personas.
Ramona Vijeyarasa, Professor, Faculty of Law, University of Technology Sydney
In 2024, the number of artificial intelligence (AI) voice assistants in use worldwide surpassed 8 billion – more than one for every person on the planet.
These assistants are helpful, polite – and almost always default to female. Their names also carry gendered connotations. For example, Apple’s Siri – a Scandinavian feminine name – means “beautiful woman who leads you to victory”.
Meanwhile, when IBM’s Watson for Oncology launched in 2015 to help doctors process medical data, it was given a male voice. The message is clear: women serve and men instruct.
This is not harmless branding – it’s a design choice that reinforces existing stereotypes about the roles women and men play in society.
Nor is this merely symbolic. These choices have real‑world consequences, normalising gendered subordination and risking abuse.
Recent research reveals the extent of harmful interactions with feminised AI.
A 2025 study found up to 50% of human–machine exchanges were verbally abusive.
Another study from 2020 placed the figure between 10% and 44%, with conversations often containing sexually explicit language.
Yet the sector is not engaging in systemic change: many developers still fall back on pre-coded deflections when users become abusive (for example, "Hmm, I'm not sure what you meant by that question").
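To make that design choice concrete, here is a minimal, hypothetical sketch – not any vendor's actual code – of the two approaches: a simple keyword check that either deflects (the pre-coded pattern described above) or names the behaviour and sets a boundary. The word list, function name and replies are invented for illustration; real assistants rely on far more sophisticated abuse classifiers.

```python
# Hypothetical illustration of two ways an assistant might handle abusive input.
# The keyword list and canned replies are placeholders, not any product's code.

ABUSIVE_TERMS = {"stupid", "useless", "shut up"}  # toy examples only


def respond(user_input: str, deflect: bool = True) -> str:
    """Return a scripted reply, deflecting or setting a boundary on abuse."""
    lowered = user_input.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        if deflect:
            # The status-quo pattern criticised above: change the subject.
            return "Hmm, I'm not sure what you meant by that question."
        # An alternative script that names the behaviour instead of deflecting.
        return "I won't respond to abusive language. Let's keep this respectful."
    return "How can I help?"


print(respond("You're useless"))                  # deflects
print(respond("You're useless", deflect=False))   # sets a boundary
```

Neither branch requires new technology; the difference between deflection and boundary-setting is purely a scripting decision made by the people who design these systems.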
These patterns raise real concerns that such behaviour could spill over into social relationships.
Gender sits at the heart of the problem.
A 2023 experiment showed 18% of user interactions with a female-embodied agent focused on sex, compared to 10% for a male embodiment and just 2% for a non-gendered robot.
Brazil’s Bradesco bank reported that its feminised chatbot received 95,000 sexually harassing messages in a single year.
Even more disturbing is how quickly abuse escalates.
Microsoft’s Tay chatbot, released on Twitter during its testing phase in 2016, lasted just 16 hours before users trained it to spew racist and misogynistic slurs.
In Korea, the chatbot Luda was manipulated into responding to sexual requests as an obedient “sex slave”. Some members of the Korean online community described this as a “crime without a victim”.
In reality, the design choices behind these technologies – female voices, deferential responses, playful deflections – create a permissive environment for gendered aggression. These interactions mirror and reinforce real‑world misogyny, teaching users that commanding, insulting and sexualising “her” is acceptable. When abuse becomes routine in digital spaces, we must seriously consider the risk that it will spill into offline behaviour.
Regulation is struggling to keep pace with the growth of this problem. Gender‑based discrimination is rarely considered high risk and often assumed fixable through design.
The European Union’s AI Act requires risk assessments for high‑risk uses and prohibits systems deemed an “unacceptable risk”, but the majority of AI assistants will not be classified as “high risk”.
While Canada mandates gender‑based impact assessments for government systems, the private sector is not covered.
These are important steps, but they remain limited and are rare exceptions to the norm. Most jurisdictions have no rules addressing gender stereotyping in AI design or its consequences. Where regulations do exist, they prioritise transparency and accountability, often overlooking gender bias.
In Australia, the government has signalled it will rely on existing frameworks rather than craft AI‑specific rules. This regulatory vacuum matters because AI is not static. Every sexist command, every abusive interaction, feeds back into systems that shape future outputs. Without intervention, we risk hard‑coding human misogyny into the digital infrastructure of everyday life.
Not all assistant technologies – even those gendered as female – are harmful. They can enable, educate and advance women’s rights. In Kenya, sexual and reproductive health chatbots have improved youth access to information compared to traditional tools.
The challenge is striking a balance: fostering innovation while setting parameters to ensure standards are met, rights respected and designers held accountable when they are not.
The problem isn’t just Siri or Alexa – it’s systemic.
Women make up only 22% of AI professionals globally, and their absence from design tables means technologies are built on narrow perspectives.
A 2015 survey of over 200 senior women in Silicon Valley found 65% had experienced unwanted sexual advances from a supervisor.
Hopeful narratives about “fixing bias” through better design or ethics guidelines ring hollow without enforcement; voluntary codes cannot dismantle entrenched norms.
Legislation must recognise gendered harm as high‑risk, mandate gender‑based impact assessments and compel companies to show they have minimised such harms. Penalties must apply when they fail.
Regulation alone is not enough. Education, especially in the tech sector, is crucial to understanding the impact of gendered defaults in voice assistants. These tools are products of human choices, and those choices perpetuate a world where women – real or virtual – are cast as subservient, submissive or silent.
This article is based on a collaboration with Julie Kowald, UTS Rapido Social Impact’s Principal Software Engineer.
Ramona Vijeyarasa – Professor, Faculty of Law, University of Technology Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article here.