Linux Kernel Establishes First Formal Policy on AI-Assisted Code Contributions
Key Takeaways
- Only humans can sign off on code under the Developer Certificate of Origin (DCO); AI agents cannot add Signed-off-by tags
- All AI-assisted contributions must include an "Assisted-by" tag identifying the model, agent, and tools used, for transparency and review purposes
- Human developers bear full legal and technical responsibility for AI-generated code, including reviewing it for bugs, security flaws, and license compliance
Summary
After months of debate, Linus Torvalds and the Linux kernel maintainers have codified the project's first formal policy governing AI-assisted code contributions. The new guidelines establish three core principles: AI agents cannot add Signed-off-by tags, since only humans can certify code under the DCO; every AI-assisted patch must carry an "Assisted-by" tag identifying the AI model and tools used; and human developers bear full responsibility for reviewing, testing, and ensuring license compliance of AI-generated code. The policy was prompted by controversy over an undisclosed AI-generated patch submitted by NVIDIA engineer Sasha Levin, which sparked broader discussion about transparency and accountability in kernel development.
The Assisted-by tag serves as both a transparency mechanism and a review flag, enabling maintainers to scrutinize AI-assisted patches appropriately without stigmatizing the practice. The Linux maintainers ultimately chose "Assisted-by" over alternatives like "Generated-by" or "Co-developed-by" to better reflect that AI functions as a tool rather than a co-author. This pragmatic approach acknowledges that AI coding assistants have become genuinely useful for kernel development while maintaining the rigorous quality standards and legal accountability that are foundational to the Linux project.
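To make the "review flag" role of the trailer concrete, here is a minimal sketch of how tooling might parse commit-message trailers and surface AI-assisted patches for extra scrutiny. This is purely illustrative: the helper names, the exact trailer syntax, and the example commit message are assumptions, not actual kernel infrastructure.

```python
import re

# Matches "Key: value" trailer lines such as "Signed-off-by: ..." or
# "Assisted-by: ...". Illustrative only; real git trailers are parsed
# by `git interpret-trailers`, not this regex.
TRAILER_RE = re.compile(r"^([A-Za-z-]+):\s*(.+)$")

def parse_trailers(commit_message: str) -> dict:
    """Collect 'Key: value' lines from a commit message into a dict of lists."""
    trailers = {}
    for line in commit_message.strip().splitlines():
        m = TRAILER_RE.match(line.strip())
        if m:
            trailers.setdefault(m.group(1), []).append(m.group(2))
    return trailers

def review_flags(commit_message: str) -> list:
    """Return review warnings reflecting the policy described above."""
    t = parse_trailers(commit_message)
    flags = []
    if "Assisted-by" in t:
        # Transparency flag: reviewers give AI-assisted patches extra scrutiny.
        flags.append("AI-assisted: review for bugs, security, licensing")
        if "Signed-off-by" not in t:
            # Only a human may certify the patch under the DCO.
            flags.append("missing human Signed-off-by certification")
    return flags

# Hypothetical commit message for demonstration.
msg = """mm: fix off-by-one in example path

Assisted-by: ExampleModel v1 (hypothetical agent and tools)
Signed-off-by: Jane Developer <jane@example.org>
"""
print(review_flags(msg))
# → ['AI-assisted: review for bugs, security, licensing']
```

Because the example carries both trailers, only the scrutiny flag fires; dropping the Signed-off-by line would also trigger the missing-certification warning.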
Editorial Opinion
The Linux kernel's new AI policy strikes a pragmatic balance that other open-source projects and organizations should carefully study. By requiring transparency through Assisted-by tags while placing full accountability on human developers, the policy acknowledges that AI coding tools are now genuinely useful without creating dangerous legal ambiguity about responsibility. This approach—treating AI as a powerful tool rather than a co-author—may become a model for how mature software projects can safely integrate AI without compromising quality or dodging accountability.