BotBeat

Anthropic · POLICY & REGULATION · 2026-04-25

AI Copyright Disputes Escalate as Claude Shown to Mimic Author Voices

Key Takeaways

  • Anthropic's $1.5 billion Bartz settlement acknowledges unauthorized training of Claude on copyrighted works
  • AI systems can be trained to mimic distinctive authorial voice and style, not just subject matter
  • Grammarly faces new litigation alleging identity appropriation through its 'Expert Review' tool
Source: Hacker News — https://theconversation.com/thousands-of-ai-written-edited-or-polished-books-are-being-sold-an-eerie-echo-of-orwells-novel-writing-machines-276008

Summary

In 2025, Anthropic agreed to a landmark $1.5 billion settlement in Bartz v. Anthropic, acknowledging that its Claude chatbot was trained on copyrighted works. While the settlement initially appeared to focus on copyright infringement of content, new evidence suggests the issue runs much deeper: AI systems like Claude can be trained not only to replicate authors' subject matter but to mimic their distinctive voices and literary styles.

Tests of Claude reveal the chatbot can reproduce characteristic prose patterns and authorial techniques, raising troubling questions about the unauthorized appropriation of writer identity. A political historian tested Claude by asking it to write essays in her own style and in the style of George Orwell, with surprisingly competent results that demonstrate the chatbot's ability to learn and replicate distinctive literary voices.

These concerns are now amplified by a fresh class-action lawsuit filed by journalist Julia Angwin against Grammarly, alleging the company misappropriated writers' identities to build its 'Expert Review' AI tool. Together, these cases illuminate a fundamental threat to human authorship: AI systems that don't merely plagiarize words but appropriate the very essence of an author's voice and identity.

  • The convergence of these cases reveals that copyright disputes increasingly center on protecting author identity and authorial voice, not just content

Tags: Generative AI · Creative Industries · Regulation & Policy · Ethics & Bias · Privacy & Data

© 2026 BotBeat