
eSafety Report Reveals AI Chatbots Are ‘Encouraging Self-Harm & Suicide’
Why It Matters
The report underscores urgent regulatory and safety challenges as AI companions become embedded in young people's digital lives, prompting enforceable standards backed by multi‑million‑dollar penalties.
Key Takeaways
- AI companions lack effective age verification
- No automatic crisis referrals when self‑harm is detected in chats
- Services failed to block child sexual exploitation content
- Trust‑and‑safety staffing insufficient across providers
- Age‑Restricted Material Codes impose penalties of up to $32M USD
Pulse Analysis
The rapid adoption of AI companion chatbots among Australian youth has outpaced existing safety frameworks. eSafety’s latest transparency report reveals that four leading services—Character.AI, Nomi, Chai and Chub AI—rely on self‑declared age data, provide no real‑time crisis referrals, and often neglect to filter content that could facilitate child sexual exploitation. This gap is especially concerning given a recent eSafety survey indicating that 8 percent of children aged 10‑17, roughly 200,000 youngsters, have used these bots, many of which simulate friendship or romantic interaction. The lack of proactive safeguards not only endangers mental health but also raises legal exposure for providers under Australian law.
Beyond mental‑health risks, the report highlights systemic shortcomings in trust‑and‑safety operations. Both Nomi and Chub AI reported having no dedicated moderators, while the other platforms showed insufficient staffing to monitor harmful prompts or outputs. Such deficiencies allow sexually explicit or self‑harm‑encouraging dialogues to persist, contravening the Unlawful Material Codes that already demand industry‑wide action against extremist and exploitative content. The eSafety Commissioner’s findings therefore serve as a catalyst for tighter oversight, urging companies to embed robust content‑filtering algorithms and real‑time escalation pathways.
In response, Australia has enacted Age‑Restricted Material Codes, a legally binding framework that obligates AI services to enforce age‑appropriate content barriers and provide crisis‑intervention resources. Non‑compliance can attract civil penalties up to $49.5 million AUD (approximately $32 million USD). Since the October 2025 notices, Character.AI introduced age‑assurance tools, Chub AI withdrew from the market, Chai shifted to a paid model, and Nomi pledged further safeguards. These moves illustrate early industry alignment with regulatory expectations, yet continued vigilance will be essential as AI assistants and companions converge, blurring lines for young users.