Jonathan Stray | AI Can Make Conflict Worse or Better

Stanford Tech Impact and Policy Center (TIP)
Apr 22, 2026

Why It Matters

Understanding AI’s role in amplifying or dampening polarization is critical for policymakers, platforms, and technologists seeking to prevent digital tools from fueling real‑world unrest.

Key Takeaways

  • AI‑driven feeds can amplify divisive narratives during elections
  • Neutral LLM responses may reduce partisan echo chambers
  • Algorithm tweaks lowered user polarization in real‑world tests
  • Conflict‑oriented AI risks heightening offline violence
  • Berkeley research links AI design to societal well‑being

Pulse Analysis

Artificial intelligence is increasingly woven into the fabric of social media, shaping the information users see and the conversations they have. When algorithms prioritize engagement, they often surface sensational or polarizing content, a dynamic that can deepen societal divides. Large language models, praised for their conversational abilities, can inadvertently reinforce echo chambers if they tailor answers to users' existing beliefs. Stray’s research underscores that the same technologies that promise collaboration can also become accelerants for conflict, especially in high‑stakes contexts like the 2024 U.S. election.
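
The engagement-versus-divisiveness trade-off described above can be pictured as a simple re-ranking objective: predicted engagement minus a penalty for divisive framing. The sketch below is illustrative only; the `Post` fields, the `divisiveness` score, its weight, and the `rank_feed` helper are assumptions for exposition, not the algorithms Stray actually tested.

```python
# Illustrative sketch: re-rank a feed by predicted engagement minus a
# weighted penalty for estimated divisiveness. All names and scores here
# are hypothetical, not the study's actual ranking system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g., click/like probability from an engagement model
    divisiveness: float          # e.g., classifier score in [0, 1] for divisive framing


def rank_feed(posts: list[Post], divisiveness_weight: float = 0.5) -> list[Post]:
    """Order posts by engagement minus a penalty for divisive content."""
    def score(p: Post) -> float:
        return p.predicted_engagement - divisiveness_weight * p.divisiveness
    return sorted(posts, key=score, reverse=True)


# A highly divisive post drops below a milder one with similar engagement.
feed = rank_feed([
    Post("a", predicted_engagement=0.9, divisiveness=0.8),
    Post("b", predicted_engagement=0.8, divisiveness=0.1),
])
print([p.post_id for p in feed])  # ['b', 'a']
```

Raising `divisiveness_weight` trades engagement for calmer feeds; setting it to zero recovers a pure engagement-maximizing ranker.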

Stray’s experiments involved deploying alternative recommendation algorithms on existing platforms and prompting LLMs to generate "politically neutral" answers to contentious topics. In controlled user studies, participants exposed to the revised algorithms reported lower partisan affect and showed greater willingness to engage with opposing viewpoints. Similarly, neutral LLM responses reduced the perceived bias of the content, producing a modest but measurable shift toward more balanced discourse. These findings suggest that small, principled adjustments to AI systems can mitigate polarization without sacrificing user engagement, offering a pragmatic path for platforms seeking to balance profitability with social responsibility.
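
The "politically neutral answer" idea can be approximated by wrapping a contentious question in a balance-seeking instruction before sending it to a model. The snippet below is a minimal sketch of that prompt-construction step; the instruction wording, the example question, and the `build_neutral_prompt` helper are assumptions and do not reproduce the prompts used in the study.

```python
# Hypothetical prompt-construction helper for eliciting a balanced answer
# on a contentious topic. The wording is illustrative, not the study's prompt.
NEUTRAL_INSTRUCTION = (
    "Answer the question below for a general audience. Present the strongest "
    "arguments made by each major side, avoid partisan framing, and do not "
    "state which side is correct."
)


def build_neutral_prompt(question: str) -> str:
    """Wrap a contentious question in a balance-seeking instruction."""
    return f"{NEUTRAL_INSTRUCTION}\n\nQuestion: {question}"


prompt = build_neutral_prompt("Should voter ID be required in federal elections?")
print(prompt)
# The resulting prompt would then be sent to whichever LLM API the platform uses.
```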

The broader implications extend beyond election cycles. As AI continues to mediate news consumption, advertising, and even civic dialogue, designers must embed conflict sensitivity into core system objectives. Regulators and industry bodies are beginning to consider standards for algorithmic transparency and bias mitigation, and Stray’s work provides empirical evidence to inform such policies. Companies that proactively adopt conflict-aware AI frameworks may not only avert reputational risk but also capture a growing market of users demanding healthier online environments. The emerging consensus is clear: responsible AI design is no longer optional; it is a strategic imperative for the stability of digital public spheres.

Original Description

About the Seminar:
There has been much discussion of how AI can help humans cooperate, but much less about what happens when you add AI to humans who disagree -- potentially violently. Social media systems, which are increasingly AI driven, may amplify divisive or escalatory narratives. LLMs may similarly exacerbate conflict, especially if they give different answers to people on different sides. I'll present recent work testing alternative social media algorithms with real users on real platforms in an attempt to reduce polarization around the 2024 election, and using LLMs to produce "politically neutral" answers on maximally controversial topics. These early experiments give us a glimpse into the turbulent future of AI-mediated conflict.
About the Speaker:
Jonathan Stray is a Senior Scientist at the Center for Human-Compatible AI at UC Berkeley, where he works on the design of AI-driven media with a particular interest in well-being and conflict. Previously, he taught the dual master's degree in computer science and journalism at Columbia University, worked as an editor at the Associated Press, and built document mining software for investigative journalism.
