BotBeat

Servo (Linux Foundation)
POLICY & REGULATION · 2026-04-14

Linux Kernel Establishes First Formal Policy on AI-Assisted Code Contributions

Key Takeaways

  • Only humans can sign off on code using the Developer Certificate of Origin (DCO); AI agents cannot add Signed-off-by tags
  • All AI-assisted contributions must include an "Assisted-by" tag identifying the model, agent, and tools used for transparency and review purposes
  • Human developers bear full legal and technical responsibility for AI-generated code, including reviewing it for bugs, security flaws, and license compliance
Source: Hacker News
https://www.zdnet.com/article/linus-torvalds-and-maintainers-finalize-ai-policy-for-linux-kernel-developers/

Summary

After months of debate, Linus Torvalds and the Linux kernel maintainers have officially codified the project's first formal policy governing AI-assisted code contributions. The new guidelines establish three core principles: AI agents cannot add Signed-off-by tags (only humans can certify code), mandatory "Assisted-by" attribution must identify the AI model and tools used, and human developers bear full responsibility for reviewing, testing, and ensuring compliance of AI-generated code. The policy was prompted by controversy surrounding an undisclosed AI-generated patch submitted by NVIDIA engineer Sasha Levin, which sparked broader discussion about transparency and accountability in kernel development.

The Assisted-by tag serves as both a transparency mechanism and a review flag, enabling maintainers to scrutinize AI-assisted patches appropriately without stigmatizing the practice. The Linux maintainers ultimately chose "Assisted-by" over alternatives like "Generated-by" or "Co-developed-by" to better reflect that AI functions as a tool rather than a co-author. This pragmatic approach acknowledges that AI coding assistants have become genuinely useful for kernel development while maintaining the rigorous quality standards and legal accountability that are foundational to the Linux project.
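The tagging convention described above can be sketched with a short example. In this hedged illustration, the patch subject, developer name, and agent/model names are all hypothetical; the source specifies only that an Assisted-by trailer must identify the model, agent, and tools used, and that the Signed-off-by line must come from a human:

```
mm/slab: fix use-after-free in cache shrink path

[patch description]

Assisted-by: ExampleAgent v1.0 (model: example-llm)
Signed-off-by: Jane Developer <jane.dev@example.org>
```

Under the policy, only the human developer may add the Signed-off-by line, certifying the contribution under the DCO, while the Assisted-by trailer flags the patch so maintainers can review it appropriately.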


Editorial Opinion

The Linux kernel's new AI policy strikes a pragmatic balance that other open-source projects and organizations should carefully study. By requiring transparency through Assisted-by tags while placing full accountability on human developers, the policy acknowledges that AI coding tools are now genuinely useful without creating dangerous legal ambiguity about responsibility. This approach—treating AI as a powerful tool rather than a co-author—may become a model for how mature software projects can safely integrate AI without compromising quality or dodging accountability.

Regulation & Policy · Ethics & Bias · Open Source

More from Servo (Linux Foundation)

Servo (Linux Foundation)
UPDATE

Linux 7.0 Released: Rust Support Officialized as Torvalds Highlights AI's Growing Role in Bug Detection

2026-04-14
Servo (Linux Foundation)
UPDATE

Linux 7.0 Adds Support for New Keys Designed for AI Agent Interactions on Upcoming Laptops

2026-04-09
Servo (Linux Foundation)
INDUSTRY REPORT

Linux Kernel Maintainers Report Sudden Shift: AI Bug Reports Jump from 'Junk to Legit' Overnight

2026-03-27

Suggested

Anthropic
PRODUCT LAUNCH

Finance Leaders Sound Alarm as Anthropic's Claude Mythos Expands to UK Banks

2026-04-17
Anthropic
RESEARCH

Study: Leading LLMs Fail in 80% of Early Differential Diagnosis Cases, Raising Patient Safety Concerns

2026-04-17
N/A
INDUSTRY REPORT

Investigation: AI-Generated Deepfake Nudes Affecting Nearly 90 Schools Across 28 Countries

2026-04-17
© 2026 BotBeat