The Scoop: Grammarly Apologizes for AI Tool that Mimics Writers Amid Legal Dispute

PR Daily (Ragan)
Mar 13, 2026

Why It Matters

The admission during litigation underscores heightened legal risk for AI firms that leverage personal likenesses, making consent and transparency non‑negotiable. It also threatens Grammarly’s credibility and could shape broader industry standards.

Key Takeaways

  • Grammarly disabled Expert Review after lawsuit
  • Journalist Julia Angwin filed class-action suit
  • CEO apologized, citing mishandled rollout
  • Tool mimicked real writers without consent
  • Incident raises AI ethics and trust concerns

Pulse Analysis

Grammarly’s “Expert Review” feature was marketed as a way for users to receive feedback modeled on the voices of celebrated journalists and scholars. By training the AI on publicly available writings and then attaching real names, the company blurred the line between algorithmic output and personal endorsement. The backlash erupted when journalists, most notably investigative reporter Julia Angwin, discovered their identities were being used to lend authority to AI‑generated suggestions without any consent, prompting a class‑action lawsuit and immediate shutdown of the feature.

The legal dispute raises critical questions about the ownership of a public figure’s stylistic imprint and the applicability of right‑of‑publicity laws to AI‑generated content. Courts have begun to recognize that even non‑visual likenesses can be protected when they are used for commercial gain. Grammarly’s case may set a precedent, signaling to AI developers that leveraging recognizable voices without explicit permission could invite costly litigation and regulatory scrutiny, especially as lawmakers worldwide consider stricter AI disclosure requirements.

Beyond the courtroom, the incident serves as a cautionary tale for the broader tech industry. Companies must embed consent mechanisms, transparent labeling, and robust ethical reviews into product pipelines to preserve user trust. As AI tools become more sophisticated, stakeholders—including investors, partners, and end‑users—will demand clear policies that respect individual reputations. Grammarly’s public apology and promise to rethink its approach may mitigate some reputational damage, but rebuilding credibility will require demonstrable changes in how AI systems handle personal attribution.
