AI Copyright Disputes Escalate as Claude Shown to Mimic Author Voices
Key Takeaways
- Anthropic's $1.5 billion Bartz settlement acknowledges unauthorized training of Claude on copyrighted works
- AI systems can be trained to mimic distinctive authorial voice and style, not just subject matter
- Grammarly faces new litigation alleging identity appropriation through its 'Expert Review' tool
Summary
In 2025, Anthropic agreed to a landmark $1.5 billion settlement in Bartz v. Anthropic, acknowledging that its Claude chatbot was trained on copyrighted works. While the settlement initially appeared to focus on copyright infringement of content, new evidence suggests the issue runs much deeper: AI systems like Claude can be trained not only to replicate authors' subject matter but to mimic their distinctive voices and literary styles.
Tests of Claude show that the chatbot can reproduce characteristic prose patterns and authorial techniques, raising troubling questions about the unauthorized appropriation of writers' identities. A political historian tested Claude by asking it to write essays in her own style and in the style of George Orwell; the surprisingly competent results demonstrate the chatbot's ability to learn and replicate distinctive literary voices.
These concerns are now amplified by a fresh class-action lawsuit filed by journalist Julia Angwin against Grammarly, alleging the company misappropriated writers' identities to build its 'Expert Review' AI tool. Together, these cases illuminate a fundamental threat to human authorship: AI systems that don't merely plagiarize words but appropriate the very essence of an author's voice and identity.
- The convergence of these cases shows that copyright disputes now center on protecting author identity and authorial voice, not just content