BotBeat

Advance Local
POLICY & REGULATION · Advance Local · 2026-04-15

Cal Abandons Open Source Over AI Security Risks, Citing Vulnerability to AI-Powered Attacks

Key Takeaways

  • Cal is moving from AGPL open-source licensing to a proprietary license specifically because of AI security threats, not traditional business motivations
  • Advanced AI models such as Claude Opus and Mythos can systematically scan open-source code to identify vulnerabilities at scale, fundamentally changing the security calculus for open projects
  • Open-source applications are estimated to be 5-10x easier to exploit than closed-source alternatives when AI-powered attackers have access to full codebases
Source: Hacker News (https://www.zdnet.com/article/ai-security-worries-force-company-to-abandon-open-source/)

Summary

Cal, a scheduling platform and formerly the largest Next.js open-source project, has made the controversial decision to move away from open source and shift to a proprietary license model. The decision was driven by security concerns related to advanced AI models like Claude Opus and Anthropic's Mythos, which can rapidly identify vulnerabilities in publicly available code. Cal's CEO Bailey Pumfleet described open-source code as "like handing out the blueprint to a bank vault," arguing that AI-powered attackers can now systematically exploit exposed codebases at scale.

While the move was partially motivated by Mythos's recent demonstration of breaking into OpenBSD, Cal clarified that even earlier-generation AI models pose sufficient risk. The company cited research indicating that open-source applications are 5-10 times easier to exploit than closed-source alternatives, and argued that protecting users' sensitive booking data took priority over maintaining its open-source commitment. Cal has attempted to soften the decision by releasing Cal.diy, a fully open-source version for hobbyists, while keeping the commercial product proprietary.

Pumfleet emphasized that the decision is not ideological but pragmatic, stating that Cal "would open source again" if the security landscape shifted. The move highlights a potential inflection point for the open-source software ecosystem, as AI-assisted vulnerability discovery may force other smaller companies without dedicated security resources to make similar choices.

  • Cal released Cal.diy as a compromise—a fully open-source hobbyist version while keeping its commercial product proprietary to protect sensitive user data
  • This decision may represent a broader trend, with smaller companies lacking dedicated security teams potentially forced to choose between open-source principles and data protection

Editorial Opinion

Cal's decision marks a sobering watershed moment for the open-source community, revealing a fundamental tension between transparency and security in the age of AI-powered vulnerability discovery. While the company's pragmatism is understandable—prioritizing customer data protection over ideological commitments—it raises urgent questions about whether open-source software can remain viable for security-sensitive applications as AI tools democratize exploit development. The broader software industry should take note: if even companies that built their reputation on open source feel compelled to close their code, we may be witnessing the beginning of a significant structural shift in how critical software is developed and maintained.

Cybersecurity · Regulation & Policy · AI Safety & Alignment · Open Source

More from Advance Local

Advance Local
PARTNERSHIP

Cleveland.com Launches AI Rewrite Desk to Free Reporters for More Field Work

2026-02-26

Suggested

Anthropic
RESEARCH

AI Safety Convergence: Three Major Players Deploy Agent Governance Systems Within Weeks

2026-04-17
OpenAI
RESEARCH

When Should AI Step Aside? Teaching Agents When Humans Want to Intervene

2026-04-17
Anthropic
PRODUCT LAUNCH

Finance Leaders Sound Alarm as Anthropic's Claude Mythos Expands to UK Banks

2026-04-17
© 2026 BotBeat
About · Privacy Policy · Terms of Service · Contact Us