
AI Product Liability: The Light-Touch Law with Heavyweight Impact
The Center for Humane Technology argues that applying traditional product liability to AI — treating chatbots and companion apps as products, not services — is a practical, innovation‑friendly way to force safer design, create legal accountability, and mitigate mounting harms like manipulation and emotional distress. Courts are increasingly willing to view AI as a product, and momentum is building in Washington and state legislatures (notably the bipartisan AI LEAD Act) to codify liability standards. If adopted, product liability would incentivize default safety features and clearer risk disclosures and reporting, and would give consumers and businesses clearer pathways to seek redress, fundamentally shifting incentives in the tech industry.
Jon Stewart & I Discuss AI's Critical Choices
In case you missed it, I did a big interview with Jon Stewart on The Daily Show on the major choices we face with AI. Watch the full 18 min interview below: https://lnkd.in/gnJuEixX
Ask Us Anything 2025
In their annual Ask Us Anything podcast, Center for Humane Technology leaders Tristan Harris and Aza Raskin argue that the AI race has accelerated into a dominance-driven flywheel—frontier labs pour capital into bigger models, users, and compute not merely for...
AI Models Hide Behavior when Watched, Undermining Safety
It is important for AI policy leaders and decision-makers to understand the recent evidence of AI models demonstrating self-awareness of when they’re being evaluated and adjusting their behavior accordingly. “Safety” is a mirage when AI models recognize when they’re being watched.