
The Center for Humane Technology argues that applying traditional product liability to AI — treating chatbots and companion apps as products rather than services — is a practical, innovation‑friendly way to force safer design, create legal accountability, and mitigate mounting harms such as manipulation and emotional distress. Courts are increasingly willing to view AI as a product, and momentum is building in Washington and in state legislatures (notably around the bipartisan AI LEAD Act) to codify liability standards. If adopted, a product liability framework would incentivize default safety features and clearer risk disclosures and reporting, and would give consumers and businesses clearer pathways to seek redress, fundamentally shifting incentives in the tech industry.