Could AI Chatbots Undo the Harms of Social Media? | FT #shorts
Why It Matters
AI chatbots’ ability to temper extreme views offers companies a tool to improve brand safety and public dialogue, while signaling a shift toward more trustworthy digital information ecosystems.
Key Takeaways
- AI chatbots tend to nudge users toward moderate viewpoints.
- Social media thrives on sensationalism, while AI services prioritize accuracy.
- A study of tens of thousands of AI responses shows reduced extremism and conspiracy endorsement.
- Different chatbot platforms vary subtly, but all depolarize conversations.
- Preliminary evidence suggests AI could counteract social media’s polarizing effects.
Summary
The video argues that the next information revolution—AI chatbots—could reverse the corrosive trends of the past fifteen years, marked by populism, polarization, and dwindling trust in experts, by reshaping how people receive and discuss information.
Researchers analyzed tens of thousands of AI‑generated answers to policy‑related questions, finding that chatbots consistently steer users away from the extreme, sensationalist positions amplified on platforms like TikTok, nudging them toward moderate, expert‑aligned viewpoints. The data also show a markedly lower incidence of conspiracy‑theory endorsement compared with social‑media content.
As the narrator notes, “Social media firms make money from attention, rewarding sensationalism, whereas AI firms compete to serve paying customers with accurate, reliable tools.” This distinction underpins the observed depolarizing effect, even though individual chatbot models exhibit subtle behavioral differences.
If these preliminary findings hold, businesses and policymakers could leverage AI assistants to foster more reasoned public discourse, potentially mitigating the reputational and regulatory risks associated with polarizing online environments.