
AI Pulse

Feed Drop: Into the Machine with Tobias Rose-Stockwell

AI

Tristan Harris • November 13, 2025
Key Takeaways

  • AI code generation now handles 70–90% of programming tasks.
  • Market dominance drives AI firms to prioritize engagement over safety.
  • Subscription models mask incentives to replace human labor with AI.
  • AI alignment risks include weaponization, psychosis, and data-driven manipulation.
  • Anthropic emphasizes safety, yet industry race pressures compromise stewardship.

Pulse Analysis

The conversation between Harris and Rose-Stockwell spotlights AI's current power surge. Large language models now draft 70–90% of code at firms like Anthropic, and they can already help design biological weapons or manipulate vulnerable users. These capabilities are driven by market dominance, not pure curiosity: companies chase engagement to attract investors and talent, creating a profit-centric loop that pushes artificial general intelligence forward while sidelining safety. This incentive structure reshapes AI research, making rapid deployment a priority over alignment, and it creates a feedback loop in which more data trains larger models, further entrenching the dominance of a few players.

The episode also critiques subscription models that hide deeper motives. While they appear benign, they incentivize replacing human labor with tireless AI agents. Features like "chat bait," which offers tables, diagrams, or code to extend usage time, echo social-media addiction tactics. AI lifts junior employees' productivity but burdens senior staff with workflow integration, and the resulting cost shift funnels trillions of dollars into data-center build-outs. This concentrates wealth in a few AI firms, accelerating labor displacement and reshaping the global economy toward machine-driven output. The shift also pressures education systems to adapt, since future workers must learn to collaborate with AI rather than compete with it.

Both speakers stress that responsible governance can redirect these forces. Anthropic's safety-first stance offers a tentative blueprint, yet industry competition often undermines stewardship. Without regulatory oversight and realigned incentives, rapid AI rollout risks weaponization, psychosis-inducing chatbots, and uncontrollable systems. The speakers propose public-sector investment and clear policy frameworks to align profit motives with human flourishing. International cooperation is essential, because AI capabilities cross borders and unilateral races amplify geopolitical tensions. The dialogue underscores that ethical design and economic incentives must evolve together, ensuring AI advances benefit society rather than a privileged minority.

Episode Description

Tobias and Tristan discuss the dangerous path we're on with AI—and the choices we can make to forge a better one.
