Python 'Chardet' Package Maintainers Attempt License Change via LLM Rewrite, Sparking Legal Debate
Key Takeaways
- Chardet v7 maintainers fed the LGPL-licensed codebase through Anthropic's Claude LLM and relicensed the output as MIT, sparking controversy over whether this constitutes legitimate reimplementation
- Recent court decisions suggest LLM output may not be copyrightable for lack of sufficient human authorship, potentially undermining the maintainers' legal position
- The case raises existential questions for open source: could AI-based "license laundering" allow strip-mining of the GPL commons without community benefit?
Summary
Maintainers of 'chardet', the popular Python character-encoding detection package, have released version 7, claiming it represents a "ground-up rewrite" that justifies changing the license from LGPL to MIT. However, the rewrite was produced by feeding the existing copyrighted codebase through Anthropic's Claude LLM, rather than through a traditional clean-room reimplementation. The maintainers argue that the result is a new work of authorship eligible for relicensing, citing the Oracle v. Google decision on API fair use.
The move has triggered heated controversy in the open source community, with critics arguing that it amounts to "laundering" GPL-licensed code through an AI system to escape copyleft obligations. Legal experts note that recent court decisions suggest LLM output may not be copyrightable, since it lacks sufficient human creative expression. The case raises a fundamental question: can running copyrighted code through an LLM strip it of its original licensing terms?
The controversy highlights broader concerns about AI's impact on open source software. If such license laundering becomes accepted practice, it could allow commercial interests to appropriate community-developed GPL software, relicense it under permissive terms, and monetize it without contributing back to the commons. The maintainers' approach differs significantly from clean-room reimplementation, where developers work from specifications without access to original source code—a practice courts have traditionally recognized as legitimate.
Discussion in GitHub issue #327 has drawn attention from legal scholars and open source advocates debating the implications. The case may ultimately require judicial resolution to clarify whether LLM-mediated code transformation can bypass software licensing obligations, with potentially far-reaching consequences for the open source ecosystem and the enforceability of copyleft licenses in the age of generative AI.
Unlike traditional clean-room reimplementation, the maintainers had direct access to the original copyrighted code when using the LLM, which weakens any fair use argument.

