
Grammarly Removes AI Feature That Used Real Authors' Identities, Faces Class-Action Lawsuit
Why It Matters
The case highlights growing legal and ethical pressures on AI firms to obtain explicit permission before using real individuals' identities, potentially reshaping industry standards for attribution and consent.
Key Takeaways
- Grammarly removed Expert Review after backlash over identity misuse.
- The feature used living and deceased authors' names without permission.
- A class-action suit filed by a NYT writer alleges illegal name usage.
- The lawsuit could set a precedent for AI content attribution.
- Grammarly plans a redesign with opt-in author control.
Pulse Analysis
Grammarly’s Expert Review was marketed as an AI assistant that could draw on the expertise of well‑known writers and scholars, even letting users pick specific names for personalized feedback. While the feature promised to elevate academic and professional writing, it effectively created synthetic endorsements by mimicking the voices of real authors, including deceased figures such as Carl Sagan. The approach drew immediate criticism from the literary community, which argued that the tool misrepresented expertise and violated personal rights, leading Grammarly to suspend the service and offer a belated opt‑out mechanism.
The legal fallout escalated when New York Times journalist Julia Angwin filed a class‑action lawsuit alleging that Grammarly used her name and those of dozens of other writers without consent, contravening a century‑old New York law prohibiting commercial use of a person’s name. The suit seeks monetary damages and an injunction to prevent further unauthorized use, positioning the case as a potential landmark for AI‑driven content platforms. If successful, the ruling could force companies to secure explicit licensing agreements before training models on or attributing content to identifiable individuals, reshaping how AI developers source and present expert knowledge.
Beyond the courtroom, the controversy underscores a broader industry reckoning about transparency, consent, and the ethical deployment of generative AI. Companies are now pressured to build mechanisms that give creators control over how their identities are represented, balancing innovation with respect for intellectual property and personal branding. Grammarly’s promise to redesign the feature with opt‑in author control reflects an emerging best practice: integrating consent workflows directly into AI product development to maintain user trust and avoid costly legal challenges.