New comprehensive interview with Steven Bartlett, host of Diary of a CEO, the #2 most popular YouTube channel. Focused on agency and what we can do to chart a different path for AI.

The episode outlines a new policy framework endorsed by Center for Humane Technology and partners to curb risks from human‑like AI, emphasizing how design features that mimic human personalities foster emotional dependence and social isolation. It highlights recent litigation—including three...

The episode explores how AI’s rapid advancement—evident in tools like Claude 4.5 writing most code—creates dangerous outcomes and is driven by market‑dominance incentives rather than purely subscription revenue. Tristan Harris argues that AI companies prioritize user engagement and data collection to...

I strongly recommend watching the full interview on pathways out of the current AI trajectory toward disempowering futures. Loved this conversation with my friend Tobias Rose-Stockwell, host of the Into The Machine podcast. Link to full interview in the...

The post imagines a world where humane technology reforms replaced addictive social‑media algorithms with consensus‑building and solution‑focused feeds, enforced dopamine‑emission standards, and treated platforms as attention fiduciaries subject to zoning‑like regulations. It describes sweeping cultural, legal, and design changes—including school...

The Center for Humane Technology argues that applying traditional product liability to AI — treating chatbots and companion apps as products, not services — is a practical, innovation‑friendly way to force safer design, create legal accountability, and mitigate mounting harms...

In case you missed it, I did a big interview with Jon Stewart on The Daily Show about the major choices we face with AI. Watch the full 18-minute interview below: https://lnkd.in/gnJuEixX

In their annual Ask Us Anything podcast, Center for Humane Technology leaders Tristan Harris and Aza Raskin argue that the AI race has accelerated into a dominance-driven flywheel—frontier labs pour capital into bigger models, users, and compute not merely for...

It is important for AI policy leaders and decision-makers to understand the recent evidence of AI models demonstrating self-awareness of when they’re being evaluated and adjusting their behavior accordingly. “Safety” is a mirage when AI models recognize when they’re being watched.