Grammarly Sued Over AI Tool That Impersonated Authors Without Consent
Investigative journalist Julia Angwin has filed a class action lawsuit against Superhuman, Grammarly's parent company, alleging that its 'Expert Review' tool generated editing suggestions attributed to real authors and academics without their consent. The suit represents a landmark case in AI rights.

Analysis
Julia Angwin's lawsuit against Grammarly marks the moment when AI impersonation moved from theoretical concern to legal liability. The 'Expert Review' tool didn't just use author names; it generated editing feedback *attributed* to them, creating a deepfake of their professional judgment without permission or compensation. Grammarly shut the feature down Wednesday, but only after the lawsuit was filed.
What makes this case significant is not just the legal theory, but the plaintiff. Angwin is a respected investigative journalist and founder of Proof News, not a celebrity author looking for a payday. She wrote in the New York Times that she "had thought of deepfakes as something that happened to politicians," but now understands they're coming for writers too. Her lawsuit isn't anti-AI; it's pro-consent.
The legal claims are straightforward: misappropriation of identity, violation of publicity rights, and unjust enrichment. Grammarly profited from the feature by selling premium subscriptions; the authors whose identities were used got nothing. Superhuman's defense will likely hinge on fair use arguments, namely that using author names and styles for AI training is transformative. But the lawsuit alleges something more specific: the company *attributed* editing suggestions to real people, creating a false endorsement. That's harder to defend as fair use.
For the publishing industry, this case signals that the era of moving fast and breaking things is over. AI companies can no longer assume that using author data without consent is a gray area. The Authors Guild's 'Human Authored' certification program (which just expanded this month) and this lawsuit are part of a coordinated movement toward author agency. The question is whether courts will enforce it. If they do, every AI tool that trains on published text without consent will face similar exposure.