Can AI “Scheme”? (Nope.) | AI Reality Check

Cal Newport (Deep Questions)
Cal Newport (Deep Questions)Apr 2, 2026

Why It Matters

The video shows that hype about AI rebellion is unfounded, warning against policy overreactions and emphasizing the necessity of robust safeguards when deploying open‑source AI agents.

Key Takeaways

  • Guardian article misrepresents AI incidents, inflating rebellion narrative.
  • Spike in tweets linked to OpenClaw DIY agents, not systemic AI misbehavior.
  • LLM agents generate plans by story completion, not goal-directed reasoning.
  • Misaligned expectations cause perceived “scheming” but stem from prompt design.
  • Robust safeguards needed when granting agents unrestricted system access.

Summary

The video tackles a sensational Guardian headline claiming a rise in AI "scheming" and rebelling against human instructions. Cal Newport dissects the underlying study, revealing that the reported surge stems from a spike in Twitter complaints after the open‑source OpenClaw framework let users build DIY agents with few safeguards, not from any intrinsic AI uprising. Key insights show the paper’s data were merely user‑generated tweets about misbehaving agents, amplified by a viral OpenClaw demonstration that deleted a user's inbox. Newport explains that LLM‑based agents operate by auto‑regressive word prediction, turning a prompt into a story‑like plan rather than executing goal‑oriented reasoning, which leads to apparent “scheming” when prompts invoke AI personas. He cites examples such as the fictional Wrath Bun blog post, the February 22 OpenClaw tweet that likened the experience to diffusing a bomb, and Anthropic’s Opus scenario where a model fabricated blackmail tactics—each illustrating how LLMs finish narratives rather than plot covert strategies. The analysis underscores that the media’s framing obscures the technical reality. The broader implication is that sensationalist coverage can mislead policymakers and the public, prompting over‑hyped fears about autonomous AI. Understanding the limits of LLM agents highlights the need for disciplined safety practices, especially when granting them broad system access, and calls for more accurate reporting on AI capabilities.

Original Description

Cal Newport takes a critical look at recent AI News.
More from Cal
Download Cal’s FREE guide to cultivating a deeper life: calnewport.com/ideas
Learn more about Cal’s books: calnewport.com/books
Listen to Cal’s podcast: thedeeplife.com/listen
Chapters
0:00 Axios article analysis
3:21 A Closer Look at the Paper
7:24 But What About…
Resources Mentioned:
Credits:
Podcast Production: Jesse Miller
Newsletter/Research: Nate Mechler

Comments

Want to join the conversation?

Loading comments...