
Launching KORA, the First Public Benchmark for AI Child Safety
Summary
In this episode, Mathilde Collin discusses the launch of KORA, the first public benchmark for AI child safety, explaining how her background in tech and her concern for youth mental health drove the initiative. She outlines the initial challenges of defining safety criteria and aligning LLM judgments with human experts, and shares early signs of impact as labs begin to prioritize child safety metrics. Key findings from the benchmark reveal that educational integrity is a major blind spot, with 76% of cheating‑related responses rated inadequate, and that models which avoid anthropomorphic behavior are significantly safer for children. Collin envisions KORA guiding industry standards and empowering parents and edtech companies to make safer AI choices over the next decade.