#350 How to Make Hard Choices in AI with Atay Kozlovski, Researcher at the University of Zurich
AI • Healthcare

DataFramed • March 9, 2026 • 1h 10m

Why It Matters

Understanding how to embed meaningful human oversight into AI is critical as these systems become more powerful and pervasive, affecting everything from personal security to life‑or‑death decisions in warfare. The episode offers practical frameworks for developers, managers, and policymakers to mitigate bias, prevent automation over‑reliance, and align AI deployment with ethical and societal values.

Key Takeaways

  • Automation bias leads users to trust faulty AI decisions.
  • Meaningful human control requires socio‑technical analysis beyond algorithms.
  • Military AI systems can produce deadly false positives without oversight.
  • Responsibility gaps arise when AI obscures accountability.
  • Tailoring AI governance to culture mitigates ethical risks.

Pulse Analysis

At the start of the episode, Atay Kozlovski outlines why AI failures are surfacing faster than safeguards. He cites automation bias—people accepting system recommendations without question—as a recurring pitfall, illustrated by a passport‑control scanner that misidentified a traveler and a chatbot hallucinating a nonexistent restaurant. Algorithmic bias, hidden in massive training data, compounds the problem, leading to discriminatory outcomes that are hard to detect. These examples show that even seemingly low‑stakes errors can erode trust, while high‑stakes mistakes threaten safety and reputation, underscoring the urgency of robust AI ethics frameworks.

To prevent such harms, Kozlovski introduces the concept of meaningful human control, which shifts analysis from pure algorithms to a broader socio‑technical perspective. He describes the LAVENDER system used by the Israel Defense Forces, which generated risk scores for millions of civilians and produced a 10% false‑positive rate, leading to potentially lethal misidentifications. The rapid, 20‑second approval workflow exemplifies automation bias that bypasses human judgment. This case highlights a responsibility gap: when AI makes recommendations, it is unclear who is accountable, legally or morally, creating ethical blind spots that demand new governance standards.
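The danger of a 10% false-positive rate becomes concrete with a base-rate calculation: when a screener is applied to a large population in which true positives are rare, false alarms can vastly outnumber correct flags. The sketch below uses hypothetical numbers (population size, base rate, and detection rate are assumptions, not figures from the episode) to illustrate the arithmetic.

```python
# Illustrative only: the population, base rate, and detection rate below
# are hypothetical. Only the 10% false-positive rate comes from the episode.

def flagging_outcomes(population, base_rate, tpr, fpr):
    """Return (true positives, false positives, precision) for a screener."""
    positives = population * base_rate       # people who truly match the target
    negatives = population - positives       # everyone else
    true_pos = positives * tpr               # correctly flagged
    false_pos = negatives * fpr              # wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, precision

# Assumed: 1,000,000 people screened, a 0.1% true base rate,
# a 90% detection rate, and the cited 10% false-positive rate.
tp, fp, prec = flagging_outcomes(1_000_000, 0.001, 0.90, 0.10)
print(f"true positives:  {tp:,.0f}")   # 900
print(f"false positives: {fp:,.0f}")   # 99,900
print(f"precision:       {prec:.1%}")  # ~0.9%
```

Under these assumptions, fewer than 1 in 100 flagged individuals is a true positive, which is why a 20-second human review cannot meaningfully compensate for the system's error rate.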

Finally, the conversation turns to practical governance. Kozlovski argues that AI systems must be embedded within the cultural norms of their deployment environment—whether a hierarchical military unit or a collaborative hospital ward. By mapping organizational practices, designers can decide which decisions remain human‑led and where tracing mechanisms record accountability. Distinguishing legal liability from moral answerability helps close responsibility gaps, ensuring that humans, not machines, bear ultimate blame. For businesses, this translates into clear policies, audit trails, and continuous human‑in‑the‑loop checks, turning AI from a risky black box into a controlled, value‑aligned tool.
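The audit trails and human-in-the-loop checks described above can be sketched as a small review gate that records who decided what, when, and why. Everything here (the `ReviewDecision` record, the mandatory rationale, the in-memory log) is a hypothetical illustration of the idea, not a framework from the episode.

```python
# Minimal sketch of a human-in-the-loop gate with an audit trail.
# All names and structures here are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewDecision:
    case_id: str
    ai_recommendation: str
    reviewer: str
    approved: bool
    rationale: str                      # the reviewer must justify, not just click
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[ReviewDecision] = []

def review(case_id, ai_recommendation, reviewer, approved, rationale):
    """Record a human decision; reject empty rationales to discourage rubber-stamping."""
    if not rationale.strip():
        raise ValueError("A written rationale is required for every decision.")
    decision = ReviewDecision(case_id, ai_recommendation, reviewer, approved, rationale)
    audit_log.append(decision)          # tracing mechanism: every decision is logged
    return decision

# Example: a nurse overrides a sepsis alert and records why.
d = review("case-42", "flag", "nurse_7", False,
           "Vitals inconsistent with the sepsis alert; retest ordered.")
```

Requiring a written rationale is one design choice for distinguishing genuine oversight from rubber-stamping; the persistent log is what lets an organization later separate legal liability from moral answerability for a given decision.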

Episode Description

Across the AI industry, high-stakes tools are being deployed in places where errors can harm people: sepsis alerts in hospitals, identity checks, welfare fraud detection, immigration enforcement, and recommendation systems that shape life outcomes. The pattern is familiar: scale and speed go up, while human review becomes rushed, shallow, or punished for disagreeing. In daily work, that can look like a nurse forced to act on false alarms, or a team using an LLM summary in ways the designers never planned. When should you slow down deployment? How do you detect new “wild” use cases early? And what does responsible tracking and oversight look like under real pressure?

Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI.

In the episode, Richie and Atay explore why AI failures keep happening, from automation bias to opaque targeting and hiring models. They unpack “meaningful human control,” accountability, and design in healthcare, government, and warfare. You’ll also hear about deepfakes, consent, digital twins, and AI-driven civic engagement, and much more.

Links Mentioned in the Show:

“Lavender” IDF recommendation system

Amnesty International reports on AI/automation in welfare systems

“Meaningful Human Control” (MHC) framework

Connect with Atay

AI-Native Course: Intro to AI for Work

Related Episode: Harnessing AI to Help Humanity with Sandy Pentland, HAI Fellow at Stanford

Explore AI-Native Learning on DataCamp

New to DataCamp?

Learn on the go using the DataCamp mobile app

Empower your business with world-class data and AI skills with DataCamp for business
