Short: Grand Theft Grammarly W/ Julia Angwin & Peter Romer-Friedman

Computer Says Maybe


Mar 25, 2026

Why It Matters

The case highlights how AI can unlawfully appropriate a person’s name and professional reputation, raising urgent questions about digital rights and the adequacy of existing publicity laws. As AI tools proliferate, this lawsuit could set a precedent that curtails unauthorized AI impersonations across industries, protecting both public figures and everyday experts.

Key Takeaways

  • Grammarly used AI to impersonate experts without consent
  • Julia Angwin filed class-action citing New York right of publicity
  • Lawsuits highlight gaps in federal deepfake and publicity regulations
  • Poor AI edits risk reputational damage and legal liability
  • Case may set precedent for AI use of personal identities

Pulse Analysis

Grammarly’s new "expert review" feature sparked a firestorm when the AI began generating edits under the names of high‑profile journalists, authors, and academics, including Julia Angwin. The tool presented suggestions as if the real experts were reviewing the text, even fabricating anecdotes for investigative pieces. Angwin first learned of the impersonation through Casey Newton’s Platformer newsletter and quickly realized the AI was misappropriating her name and reputation, prompting her to join a class‑action lawsuit against the company, which now operates under the Superhuman brand.

Attorney Peter Romer‑Friedman explained that the case rests on well‑established state right‑of‑publicity statutes, particularly New York and California law, which protect individuals from unauthorized commercial use of their identity. While a federal right of publicity does not yet exist, the lawsuit leverages these decades‑old statutes to argue that Grammarly’s AI‑driven impersonation constitutes illegal exploitation. The complaint also underscores the inadequacy of current deep‑fake legislation, which focuses mainly on visual or audio likenesses, leaving “deepfakes of the mind” unaddressed.

The broader tech community is watching closely, as a ruling could force AI developers to obtain explicit consent before using any personal brand or expertise in their products. This precedent may ripple into other AI applications, from character chatbots to content‑generation platforms, compelling firms to reassess consent mechanisms and potentially prompting new federal legislation. Ultimately, the case highlights the intersection of privacy law, AI accountability, and consumer protection, signaling that unchecked AI impersonation will face increasing legal scrutiny.

Episode Description

Grammarly launched a feature that no one wanted and now they’re getting sued. They used the names of writers, journalists, and editors to pretend that AI versions of those people were making writing suggestions via the application. None of these ‘expert reviewers’ had any idea. Grammarly pissed off the wrong journalist.

And now Julia Angwin is suing them.

More like this: The Toxic Relationship Between AI & Journalism w/ Nic Dawes

In this episode Julia (and her lawyer Peter) discuss what happened with Grammarly, why she’s suing, and how neither of them can believe that this tool made it through their legal team and into the public realm.

Please email info@prf-law.com for more info, or to have your name checked against the list of experts Grammarly used for the tool.

Further reading & resources:

Julia’s op-ed in the New York Times

Pre-order Julia’s new book On Courage: How to be a Dissident in an Age of Fear

Check out The Markup, founded by Julia

Grammarly pulls AI author-impersonation tool after backlash — BBC 12th March 2026

Shishir Mehrotra’s (CEO of Grammarly) apology on LinkedIn

Grammarly Is Offering ‘Expert’ AI Reviews From Your Favorite Authors—Dead or Alive — Wired 4th March 2026

Grammarly is using our identities without permission — The Verge 6th March 2026

Grammarly turned me into an AI editor against my will and I hate it — Casey Newton, Platformer 9th March 2026

Details of the case, from PRF Law, Julia’s representative firm

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
