
Is AI a Threat? Professor Shannon Vallor
Professor Shannon Vallor challenges the growing narrative that artificial intelligence is an unstoppable force beyond human control. She argues that techno‑fatalism contradicts centuries of evidence of societies regulating, restricting, or even abandoning technologies deemed hazardous, whatever their commercial allure. Vallor points to concrete governance mechanisms already in place: model cards that disclose performance metrics and intended uses, algorithmic audits that detect bias, and procurement standards that write ethical criteria into purchasing decisions. Assurance cases, borrowed from safety‑critical engineering, add a further layer, offering structured arguments that a system is safe and fit for purpose before release. Together these tools show that responsible AI governance is not merely theoretical but actively practiced. The implication for businesses and regulators is clear: dismissing AI as an uncontrollable threat ignores existing governance frameworks and undermines proactive risk management, whereas embracing these tools can steer AI development toward societal benefit rather than fatalistic resignation.
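To make the model‑card idea concrete, here is a minimal sketch of the kind of structured disclosure such a document might contain. The field names and the example values are illustrative assumptions, not drawn from Vallor's talk or from any specific model‑card standard:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative model card: a structured disclosure of what a model
    is for, how it performs, and where it should not be used.
    Field names are hypothetical, not taken from any published standard."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    performance_metrics: dict[str, float]  # e.g. scores broken out by group
    known_limitations: list[str]

# Hypothetical example: the sort of disclosure a procurement clause
# might require a vendor to supply alongside the model itself.
card = ModelCard(
    model_name="credit-risk-scorer-v2",
    intended_use="Ranking loan applications for human review",
    out_of_scope_uses=["Fully automated loan denial"],
    training_data_summary="Anonymized loan outcomes, 2015-2022",
    performance_metrics={
        "auc_overall": 0.87,
        "auc_group_a": 0.85,
        "auc_group_b": 0.82,  # a gap auditors would flag for review
    },
    known_limitations=["Performance gap between demographic groups"],
)
print(card.performance_metrics)
```

The point of such an artifact is transparency, not certification: it gives auditors and purchasers something concrete to check the deployed system against.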

Reel Philosophy for Everyone 2: Mazviita Chirimuuta
The discussion centers on the growing difficulty of making artificial‑intelligence systems intelligible, especially as they are deployed in high‑stakes domains such as credit scoring and the legal system. Participants highlight that modern models, particularly large foundation models, are trained to self‑organize...