Reel Philosophy for Everyone 2: Mazviita Chirimuuta

The Royal Institute of Philosophy
Apr 9, 2026

Why It Matters

Explainability gaps threaten trust and compliance in AI‑driven decisions, compelling firms to prioritize transparent models or risk regulatory and reputational fallout.

Key Takeaways

  • Explainable AI struggles because models lack built‑in reasoning rules.
  • Credit and legal decisions demand transparent logic, not just predictions.
  • Larger foundation models increase accuracy while deepening opacity.
  • Reverse‑engineering model internals becomes increasingly intractable as models scale.
  • Predictive power alone cannot guarantee trust without an understanding of what makes predictions reliable.

Summary

The discussion centers on the growing difficulty of making artificial‑intelligence systems intelligible, especially as they are deployed in high‑stakes domains such as credit scoring and the legal system. Participants highlight that modern models, particularly large foundation models, are trained to self‑organize from data without any explicit reasoning principles embedded by their creators.

Key insights reveal a tension between predictive performance and explainability. As models become more powerful, their internal dynamics grow opaque, forcing engineers to resort to costly reverse‑engineering efforts to infer why a particular decision was made. This creates a gap where predictions can be highly accurate yet lack any transparent justification, undermining confidence in critical applications.

A notable quote from the interview underscores the paradox: “We can have prediction without any understanding of what underpins the reliability of the prediction.” The speakers cite creditworthiness assessments and legal judgments as examples where stakeholders demand clear, auditable reasoning, not just black‑box outputs.

The implications are profound for businesses and regulators. Without robust explainability, organizations risk legal exposure, reputational damage, and reduced user trust. The conversation signals a pressing need for standards, tooling, and governance that bridge predictive power with understandable, accountable AI behavior.

Original Description

In this week’s #ReelPhilosophyForEveryone, Professor Harcourt asks Dr Mazviita Chirimuuta how much we can know about the processes behind AI.
Watch the full video here: https://youtu.be/upalvKnBB7w
