
Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature
Why It Matters
The case highlights emerging legal risks for AI products that exploit personal identities, potentially reshaping industry standards for consent and transparency. It signals a looming regulatory wave that could affect all firms deploying persona‑based AI services.
Key Takeaways
- Grammarly used AI to mimic real experts without consent
- Class action alleges over $5 million in damages
- Feature disabled after public backlash and legal threat
- Potential precedent for AI‑generated likeness lawsuits
Pulse Analysis
The rapid deployment of large language models has enabled consumer apps to offer personalized editing, tutoring, and creative assistance. Yet as these tools grow more sophisticated, they increasingly blur the line between algorithmic output and human expertise. Using a well‑known author’s name to lend credibility, without permission, turns the AI into a digital deepfake that can mislead users and erode trust. Legal scholars warn that existing right‑of‑publicity statutes, especially in New York and California, are poised to be tested against this new class of AI‑generated likenesses.
Grammarly’s now‑defunct “Expert Review” feature epitomizes the controversy. By presenting suggestions as if they came from journalists, novelists, and scientists such as Stephen King and Neil deGrasse Tyson, the company monetized the reputations of hundreds of professionals. The class‑action suit filed by investigative reporter Julia Angwin alleges misappropriation of names and claims damages exceeding $5 million. The swift shutdown of the feature signals that product teams must embed consent workflows and transparent disclosures, or risk regulatory action and brand damage.
The outcome of this case could set a precedent for the entire AI‑assisted software sector. Companies developing persona‑based assistants will likely adopt stricter licensing agreements and invest in verification layers to avoid similar lawsuits. Investors are watching how firms balance innovation with legal compliance, as consumer confidence hinges on ethical AI use. Ultimately, the Grammarly episode underscores that responsible AI deployment requires not only technical excellence but also respect for individual identity rights.