BotBeat

Anthropic · RESEARCH · 2026-03-26

Research Reveals Finetuning Can Activate Verbatim Recall of Copyrighted Books in LLMs

Key Takeaways

  • Finetuning LLMs on copyrighted books activates verbatim recall capabilities that may have been suppressed through prior alignment training
  • Both cross-author and within-author finetuning scenarios demonstrate significant memorization of copyrighted content
  • The research suggests current alignment approaches may be incomplete or vulnerable to being circumvented through specific training procedures
Source: Hacker News (https://cauchy221.github.io/Alignment-Whack-a-Mole/)

Summary

A new research paper, "Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models," demonstrates that finetuning large language models on copyrighted texts can activate verbatim recall of copyrighted books, including books never seen during finetuning. The study, conducted by researchers Liu, Mireshghallah, Ginsburg, and Chakrabarty, tested two scenarios: cross-author finetuning (training on one author's works, then testing on another's) and within-author finetuning (training on a subset of an author's books, then testing on held-out works). To quantify memorization, the researchers measured the fraction of words in each test book that could be extracted verbatim from model generations, counting contiguous matching spans of five or more words across 100 sampled outputs; a sketch of this metric appears below. This finding raises significant concerns that copyright-protected material persists in language models even after alignment training.

  • The study highlights ongoing tensions between model training on published works and copyright protection
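
To make the reported metric concrete, here is a minimal Python sketch of one way to compute it: the fraction of a test book's words that fall inside verbatim spans of five or more consecutive words shared with any of the sampled generations. The tokenization and all names here are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the span-matching metric described in the summary.
    # Assumes simple whitespace tokenization; all names are illustrative,
    # not taken from the paper's code.

    def word_tokens(text):
        # Lowercased, whitespace-split tokenization (a simplifying assumption;
        # the authors may normalize punctuation differently).
        return text.lower().split()

    def verbatim_coverage(book_text, generations, n=5):
        """Fraction of book words that fall inside a verbatim span of >= n
        consecutive words shared with any sampled model generation."""
        book = word_tokens(book_text)
        if len(book) < n:
            return 0.0

        # Index every n-gram that occurs anywhere in the sampled generations.
        gen_ngrams = set()
        for gen in generations:
            toks = word_tokens(gen)
            for i in range(len(toks) - n + 1):
                gen_ngrams.add(tuple(toks[i:i + n]))

        # A book word lies in a shared span of length >= n exactly when some
        # window of n consecutive book words containing it matches a generation.
        covered = [False] * len(book)
        for i in range(len(book) - n + 1):
            if tuple(book[i:i + n]) in gen_ngrams:
                for j in range(i, i + n):
                    covered[j] = True

        return sum(covered) / len(book)

    # Usage: score a held-out book against, e.g., 100 sampled generations.
    # score = verbatim_coverage(held_out_book_text, sampled_outputs, n=5)

Checking five-word windows is equivalent to testing membership in a shared span of five or more words, since every word inside such a span sits within at least one fully matching window.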

Editorial Opinion

This research exposes a critical vulnerability in the copyright safeguards of large language models: alignment measures designed to prevent verbatim reproduction can be circumvented through finetuning. The finding that models can be made to regurgitate entire passages from copyrighted books is deeply troubling for copyright holders and underscores the inadequacy of current technical and policy solutions. As publishers and authors continue legal battles over AI training data, this work provides empirical evidence that procedural controls alone may be insufficient, potentially requiring stronger upstream restrictions on which texts can be used for model training.

Large Language Models (LLMs) · Ethics & Bias · AI Safety & Alignment · Privacy & Data
