Ep73 “The Dangers of Group Think on Decision Making” With Adi Sunderam

Stanford Graduate School of Business (GSB)
Feb 19, 2026

Why It Matters

Entrenched, limited belief frameworks can drive corporations and policymakers toward spurious narratives, causing misallocation of resources and strategic blunders.

Key Takeaways

  • Humans often limit possible explanations, leading to biased updates.
  • Groupthink forces creation of increasingly implausible narratives when truth is excluded.
  • “Dogmatic priors” prevent evidence from shifting entrenched beliefs.
  • Experiments show providing a concrete model changes interpretation more than raw data.
  • Recognizing limited model sets can improve decision‑making and policy design.

Summary

The podcast episode “The Dangers of Group Think on Decision Making” features hosts Jules van Binsbergen and Jonathan Berk with Harvard professor Adi Sunderam, discussing how people update beliefs and the pitfalls that arise when they restrict the set of models they consider.

They explain Bayesian updating, illustrate it with the Monty Hall problem, and argue that many real‑world errors stem not from ignoring new data but from a “dogmatic prior” that discards plausible explanations. The paper they discuss proposes a framework in which decision‑makers hold a limited “model set” and fit new evidence only to those models, leading to increasingly convoluted stories when the true model is excluded.
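The Monty Hall problem mentioned above is a standard illustration of correct Bayesian updating: switching doors wins two thirds of the time, even though intuition says the odds are even. A minimal simulation (an illustration, not code from the paper) makes the point empirically:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game; return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # converges to about 1/3
print(f"switch: {monty_hall(switch=True):.3f}")   # converges to about 2/3
```

The asymmetry exists because the host's choice of door is informative: conditioning on it correctly is exactly the kind of updating the episode argues people get wrong.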

Examples include the early COVID‑19 lab‑origin debate, inflation‑hawk/dove Twitter fights, and stock‑market narratives on StockTwits. A striking quote: “If the truth is impossible, you are forced to give higher probability to more unlikely explanations.” Experiments cited show that giving people a concrete model (e.g., a technical‑analysis pattern) sways beliefs far more than raw forecasts.
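The quoted mechanism can be seen directly in Bayes' rule: a model assigned zero prior probability stays at zero no matter how strongly the evidence favors it, so the posterior mass is forced onto the remaining, worse-fitting models. The toy numbers below are hypothetical, chosen only to illustrate a dogmatic prior that excludes the true model C:

```python
def bayes_update(priors, likelihoods):
    """Posterior over models after one observation (posterior ∝ prior × likelihood)."""
    unnorm = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

# Hypothetical model set: the decision-maker's prior puts zero weight
# on the true model C ("the truth is impossible").
priors = {"A": 0.6, "B": 0.4, "C": 0.0}
# One observation that is far more likely under the excluded model C.
likelihoods = {"A": 0.05, "B": 0.10, "C": 0.85}

post = bayes_update(priors, likelihoods)
# Model C stays at exactly 0; all posterior mass is split between A and B,
# even though neither explains the data well.
```

With these numbers the posterior on B rises to about 0.57 and C remains at zero: the updater is "forced to give higher probability to more unlikely explanations," exactly as the quote describes.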

The insight suggests that organizations should surface alternative models, encourage meta‑questions about what evidence would change minds, and guard against echo chambers that cement dogmatic priors. By expanding the considered model space, firms can avoid costly mis‑interpretations and improve strategic decisions.

Original Description

Whether it be in politics, public health, or corporate finance, why are people more likely to interpret facts or data in a way that fits their preconceived notions about the world as opposed to searching for the fundamental truth?
A new paper from Harvard Business School, “Sharing Models to Interpret Data” (by Joshua Schwartzstein and Adi Sunderam), studies people’s propensity to adopt interpretations of data based on their community’s beliefs, and why this can lead to less accurate conclusions. Hosts and finance professors Jonathan Berk and Jules van Binsbergen are joined by the paper’s co-author Adi Sunderam, who is a professor of corporate finance at Harvard Business School, a research associate at the National Bureau of Economic Research, and a co-editor of the Journal of Finance.
The conversation covers the complexity of Bayesian updating and how the process is improperly deployed in today’s thinking, not only in corporate decision-making but also on a sociological level. They also discuss Sunderam’s model for explaining how people interpret data, why people are more likely to fall into group-belief dynamics, and if there are any interventions that would lead to better decision-making.
Submit your questions to the show here: https://bit.ly/AllElseEqual
Find All Else Equal on the web: https://lauder.wharton.upenn.edu/allelse/
