News Chatbots that Present Multiple Viewpoints Tend to Earn the Trust of Conspiracy Believers

PsyPost
Mar 20, 2026

Why It Matters

Balanced news chatbots can reach audiences inside ideological echo chambers, but they risk legitimising misinformation, making their design critical for media trust and societal polarization.

Key Takeaways

  • Balanced chatbots gain trust from conspiracy believers
  • High-belief users prefer neutral AI over mainstream media
  • Equal exposure to opposing views reduces perceived bias
  • Risk of false equivalence when presenting fringe claims
  • Long‑term engagement and moderation remain untested

Pulse Analysis

The rapid diffusion of generative AI has turned news chatbots into a new front line for information consumption. Researchers at the University of Amsterdam built 'Infobot', a prototype that delivers side-by-side summaries of mainstream and alternative climate-change stories. In two controlled experiments with 235 U.S. adults, participants with strong conspiracy beliefs rated the bot as more trustworthy and useful than did those with lower belief scores. The study suggests that a machine perceived as neutral can break through the skepticism that typically shields conspiracy-leaning audiences from mainstream outlets.

These findings matter because they suggest a practical tool for softening ideological silos. When a chatbot presents fringe narratives on equal footing with scientific consensus, users interpret the platform as unbiased, which can increase openness to information they would otherwise dismiss. However, the same balance can create a false equivalence, inadvertently legitimising misinformation on topics where expert agreement is overwhelming, such as climate change. Policymakers and platform designers must therefore weigh the trust‑building benefits against the risk of amplifying fringe claims.

Future research should explore longitudinal usage, personalization controls, and transparent labeling to preserve credibility while avoiding misinformation. Media companies could integrate balanced‑view chatbots into news apps, offering users a curated mix of perspectives and a clear disclaimer about the scientific consensus. If deployed responsibly, such agents might expand the informational diet of skeptical audiences and reduce polarization, but only if designers monitor engagement metrics and adjust the algorithmic weighting to prevent false balance. The commercial viability of these tools will hinge on proving sustained user trust and measurable reductions in echo‑chamber effects.
