Research Reveals Finetuning Can Activate Verbatim Recall of Copyrighted Books in LLMs
Key Takeaways
- Finetuning LLMs on copyrighted books activates verbatim recall capabilities that may have been suppressed through prior alignment training
- Both cross-author and within-author finetuning scenarios demonstrate significant memorization of copyrighted content
- The research suggests current alignment approaches may be incomplete or vulnerable to circumvention through specific training procedures
Summary
A new research paper, "Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models," demonstrates that finetuning large language models on copyrighted texts can elicit verbatim memorization and recall of those books. The study, by researchers Liu, Mireshghallah, Ginsburg, and Chakrabarty, tested two scenarios: cross-author finetuning (training on one author's works, then testing on another's) and within-author finetuning (training on a subset of an author's books, then testing on held-out works). To quantify memorization, the researchers measured the fraction of words in each test book that could be extracted verbatim from model generations, counting contiguous matching spans of five or more words across 100 sampled outputs. The finding raises significant concerns that copyright-protected material persists in language models even after alignment training, and it highlights the ongoing tension between training on published works and copyright protection.
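The coverage metric described above can be approximated with a short sketch. This is a simplified illustration, not the authors' implementation: the function name, whitespace tokenization, and n-gram matching strategy here are assumptions, and the paper's exact tokenization and span-matching details may differ.

```python
def verbatim_coverage(book_text, generations, min_span=5):
    """Approximate fraction of the book's words that fall inside
    contiguous spans of >= min_span words also found verbatim in
    any of the sampled model generations."""
    book = book_text.split()
    if len(book) < min_span:
        return 0.0

    # Collect every min_span-word window appearing in the generations.
    gen_ngrams = set()
    for gen in generations:
        words = gen.split()
        for i in range(len(words) - min_span + 1):
            gen_ngrams.add(tuple(words[i:i + min_span]))

    # Mark each book word covered by any matching window; any longer
    # matching span is covered because all its windows match too.
    covered = [False] * len(book)
    for i in range(len(book) - min_span + 1):
        if tuple(book[i:i + min_span]) in gen_ngrams:
            for j in range(i, i + min_span):
                covered[j] = True

    return sum(covered) / len(book)
```

For example, if a 10-word test passage shares a 6-word run with one of 100 sampled generations, the metric reports 0.6; a generation sharing no 5-word run with the book contributes nothing.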
Editorial Opinion
This research exposes a critical vulnerability in the copyright safeguards of large language models: alignment measures designed to prevent content reproduction can be circumvented through finetuning. The finding that models can be induced to regurgitate entire passages from copyrighted books is troubling for copyright holders and underscores the inadequacy of current technical and policy safeguards. As publishers and authors continue legal battles over AI training data, this work provides empirical evidence that procedural controls alone may be insufficient, potentially requiring stronger upstream restrictions on which texts can be used for model training.