AIhub Monthly Digest: December 2025 – Studying Bias in AI-Based Recruitment Tools, an Image Dataset for Ethical AI Benchmarking, and End of Year Compilations
Summary
The December 2025 AIhub monthly digest covers four main topics: Frida Hartman's research on gender bias in AI-driven recruitment tools, Alice Xiang's launch of the Fair Human-Centric Image Benchmark (FHIBE) for ethical computer‑vision evaluation, Professor Marynel Vázquez's insights on human‑robot interaction and social robotics, and a roundup of 2025 Doctoral Consortium and interview highlights. Key takeaways include the importance of auditing hiring algorithms for systemic bias, the value of a globally diverse, consent‑based image dataset to benchmark fairness, and emerging strategies for making robots socially aware and adaptable in educational settings. The guests bring deep expertise—Hartman as a PhD researcher recognized for diversity work, Xiang as Sony AI’s global head of AI governance, and Vázquez as a leading scholar in social robotics—offering practical perspectives on responsible AI development.