Iavor Bojinov on AI Adoption, Trust, and Decision-Making

HBS Online
Apr 1, 2026

Why It Matters

Without embedding trust, transparency, and clear accountability, AI initiatives risk low adoption and heightened regulatory exposure, undermining potential business value.

Key Takeaways

  • Trust deficits stall AI adoption even when systems perform well.
  • Three trust pillars: the algorithm, its developers, and the surrounding process.
  • Transparent models and user‑centric design boost confidence and usage.
  • Certification and clear responsibility reduce fear of AI‑driven errors.
  • Leaders balance rapid AI experiments with regulatory and reputational risks.

Summary

The video features Harvard Business School professor Iavor Bojinov discussing why many AI projects stall after development. He argues that organizations focus on protecting job descriptions rather than jobs, leading to a mismatch between powerful AI tools and actual user adoption. Bojinov illustrates this with his experience at LinkedIn, where an automated causal‑inference platform reduced analysis time from weeks to a day yet saw almost no uptake beyond a handful of trained users.

Bojinov identifies three essential dimensions of trust that determine whether an AI system will be embraced: trust in the algorithm itself, trust in the developers who built it, and trust in the surrounding process. In the LinkedIn case, the model was accurate but opaque, developers were perceived as disconnected from user needs, and there was no clear liability framework for erroneous recommendations. By opening the code, conducting one‑on‑one outreach with 100 data scientists, and instituting a certification layer that assumed responsibility for validated analyses, the team transformed the platform into a widely used internal tool.

He reinforces these points with additional examples, such as Microsoft’s early Copilot rollout, where initial enthusiasm faded due to unclear performance, fears of job displacement, and ambiguous accountability during restructuring. Bojinov’s recurring mantra—"if you build it, they won’t come"—highlights that technical excellence alone does not guarantee adoption; the human and governance factors are equally critical.

The takeaway for business leaders is clear: successful AI deployment requires proactive trust‑building measures, transparent model explanations, user‑centered design, and robust process safeguards. Ignoring these elements not only stalls adoption but also amplifies regulatory and reputational risks as AI governance frameworks tighten worldwide.

Original Description

The Parlor Room returns for season 3 with a special edition: Hello AI—a series exploring how artificial intelligence is reshaping the business world. In the premiere episode, host and Harvard Business School Online Creative Director Chris Linnane talks with HBS Professor Iavor Bojinov about why AI adoption hinges on trust, how organizations can scale AI effectively, and what leaders must rethink to succeed in an AI-driven world.
GUEST
Iavor I. Bojinov, Assistant Professor of Business Administration; Richard Hodgson Fellow
RESOURCES
Learn more from Professor Iavor Bojinov in the HBS Online courses AI for Leaders (https://hbs.me/2p8wdm7p) and Data Science & AI for Decision Making (https://hbs.me/ycxajj53).
Related HBS Online Blog Posts:
AI Implementation Cost vs. ROI: Finding the Balance (https://hbs.me/2fy8h87b)
How to Overcome Barriers to AI Adoption & Get Your Company on Board (https://hbs.me/2p97cwc3)
5 Ethical Considerations of AI in Business (https://hbs.me/yckpyt8e)
#AIAdoption #EthicalAI #artificialintelligence #TheParlorRoom #HBSOnline
