BotBeat

Anthropic · Industry Report · 2026-03-06

Chardet License Switch Sparks Debate: Can AI Rewrites Bypass Copyleft, Asks Bruce Perens

Key Takeaways

  • Chardet maintainer Dan Blanchard used Anthropic's Claude AI to rewrite the library, switching from LGPL to MIT licensing and claiming less than 1.3% code similarity
  • Original creator Mark Pilgrim contests the license change, arguing that exposure to the original LGPL code means derivative works must carry the same license
  • The rewrite achieves a 48x performance improvement and aims to enable chardet's inclusion in Python's standard library, which LGPL licensing previously prevented
Source: Hacker News — https://www.theregister.com/2026/03/06/ai_kills_software_licensing/

Summary

A controversy erupted in the open source community after Dan Blanchard, maintainer of the Python library chardet, switched from LGPL to MIT licensing for version 7.0, claiming Anthropic's Claude AI created a "clean room" rewrite. The move challenges fundamental assumptions about software licensing, as Blanchard argues the AI-generated code shares less than 1.3% similarity with previous versions. Original creator Mark Pilgrim contested the change, asserting that LGPL terms require derivative works to maintain the same license regardless of rewriting methods, and that prior exposure to licensed code negates any "clean room" claim.
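The article does not say how the "less than 1.3% similarity" figure was computed, and similarity metrics vary widely. As a purely illustrative sketch, one common way to measure textual similarity between two source files uses Python's standard-library difflib; the function name and sample strings below are hypothetical, not taken from the chardet codebase:

```python
import difflib


def source_similarity(old_src: str, new_src: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two code strings.

    SequenceMatcher finds the longest matching blocks between the two
    inputs and reports (2 * matched chars) / (total chars).
    """
    return difflib.SequenceMatcher(None, old_src, new_src).ratio()


# Hypothetical before/after snippets, for illustration only.
old = "def detect(data):\n    return guess_encoding(data)\n"
new = "def detect(buf: bytes) -> dict:\n    return scan(buf)\n"
print(f"similarity: {source_similarity(old, new):.1%}")
```

Note that character-level ratios like this are sensitive to formatting and identifier renaming; a claim about derivative-work status would hinge on structural similarity and provenance, not just a diff score.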

Blanchard defended his decision by citing chardet's 130 million monthly downloads and his decade-long goal to include it in Python's standard library, which the LGPL license prevented. Using Claude, he achieved a complete rewrite in five days that delivers 48x faster performance. He maintains this constitutes a legitimate clean room implementation since no original code structure was preserved. However, critics question whether Claude's training on the original codebase undermines this claim, raising unresolved questions about AI-generated code and copyright.

The dispute has attracted attention from open source advocate Bruce Perens, who argues this case demonstrates how AI will fundamentally disrupt software licensing frameworks. The incident highlights a critical gap in intellectual property law as it applies to AI-assisted development. With no clear legal precedent for whether AI-generated rewrites qualify as derivative works under copyleft licenses, the case may set important standards for the future of open source software development and commercial licensing practices.


Editorial Opinion

This controversy exposes a critical vulnerability in decades-old open source licensing frameworks that never anticipated AI-generated code. If AI rewrites can legitimately bypass copyleft requirements, it fundamentally undermines the GPL/LGPL model that has protected collaborative software development for generations. The legal ambiguity around whether Claude's training on the original code—and the maintainer's prior intimate knowledge of it—negates "clean room" status could reshape how we think about software attribution and derivative works. This isn't just a technical dispute; it's a potential inflection point that may require new legal frameworks specifically designed for the AI era.

Tags: Large Language Models (LLMs) · Generative AI · Market Trends · Regulation & Policy · Open Source

© 2026 BotBeat