BotBeat

OpenAI
PRODUCT LAUNCH · OpenAI · 2026-05-04

OpenAI Launches GPT-5.5-Cyber with Restricted Access, Reversing Recent Criticism of Anthropic

Key Takeaways

  • OpenAI is restricting GPT-5.5-Cyber to vetted organizations despite Sam Altman's recent public criticism of Anthropic for using the same limited-access strategy
  • The model has been independently validated as among the strongest on cybersecurity tasks, raising both security capabilities and misuse concerns
  • The reversal exposes the gap between public AI ethics rhetoric and commercial gatekeeping practices
Source: Hacker News (https://www.theregister.com/2026/05/01/openai_locks_gpt55cyber_behind_velvet/)

Summary

OpenAI has announced a limited release of its new GPT-5.5-Cyber model, a tool designed to identify and exploit system vulnerabilities, restricting access to a vetted circle of "cyber defenders" working on critical infrastructure security. The rollout begins within days, marking a sharp reversal from OpenAI's public criticism of Anthropic just weeks earlier for employing identical gatekeeping tactics with Claude Mythos. CEO Sam Altman had derided exclusive access strategies on a recent podcast, likening them to "selling fear" and comparing the approach to profiting from scarcity—tactics OpenAI is now employing.

The timing and irony have drawn attention within the AI industry, highlighting tensions between making powerful capabilities broadly available versus limiting access to manage security risks. GPT-5.5-Cyber is designed for penetration testing, bug discovery, malware analysis, and systems exploitation—capabilities that carry legitimate defensive value but also inherent misuse potential. Independent validation from the UK's AI Security Institute described the model as "one of the strongest models we have tested on our cyber tasks," but underscored the double-edged nature of such powerful tools.

  • Access will initially be limited to critical infrastructure defenders, with broader rollout plans remaining unclear

Editorial Opinion

OpenAI's about-face on restricted AI access reveals an uncomfortable disconnect between public principles and commercial pragmatism. While controlled releases of powerful cybersecurity tools may serve legitimate security interests, the sharp reversal from Altman's recent, pointed critique of Anthropic deserves scrutiny. The real issue isn't whether limited access is defensible—it may be necessary for sensitive capabilities—but whether companies should publicly vilify competitors for practices they immediately adopt themselves. This moment exposes the performative nature of much AI ethics discourse.

Generative AI · Cybersecurity · AI Safety & Alignment · Product Launch

More from OpenAI

OpenAI
RESEARCH

Researchers Unveil How GPT-5.5 and Opus 4.7 Struggle With Novel Problems—And Open-Source the Tools to Prove It

2026-05-04
OpenAI
RESEARCH

Warmth-Tuned AI Models More Prone to Errors, Oxford Study Finds

2026-05-03
OpenAI
POLICY & REGULATION

Dark-Money Campaign Funded by AI Industry Figures Pays Influencers to Frame Chinese AI as a Threat

2026-05-03

Suggested

Character.AI
POLICY & REGULATION

Senate Judiciary Committee Advances GUARD Act to Regulate AI Chatbots and Protect Minors

2026-05-04
Five Eyes Alliance (CISA, NSA, NCSC-UK, ACSC, Cyber Centre, NCSC-NZ)
POLICY & REGULATION

Five Eyes Agencies Warn Organizations to Slow Rollouts of Agentic AI Due to Security Risks

2026-05-04
Rackspace
PRODUCT LAUNCH

Rackspace Launches GPU-as-a-Service with Spot Instance Pricing in San Jose Expansion

2026-05-04
© 2026 BotBeat