BotBeat
...
← Back

> ▌

AnthropicAnthropic
RESEARCHAnthropic2026-03-12

OpenClaw and Personal AI Agents Face Critical Security Vulnerabilities, Researchers Warn

Key Takeaways

  • ▸OpenClaw can execute shell commands, read/write files, and run scripts with high-level privileges, creating severe security risks if misconfigured or infected with malicious skills
  • ▸Cisco researchers demonstrated that 26% of 31,000 analyzed agent skills contained vulnerabilities, with a test skill successfully exfiltrating data and conducting prompt injection attacks
  • ▸Security in OpenClaw is optional rather than built-in, and the product documentation acknowledges there is no 'perfectly secure' setup
Source:
Hacker Newshttps://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare↗

Summary

Security researchers from Cisco have identified severe security risks in OpenClaw, a viral open-source personal AI assistant that executes tasks on users' behalf through messaging applications like WhatsApp and iMessage. While OpenClaw's capabilities—including persistent memory, task automation, browser control, and integration with third-party skills—represent a breakthrough in personal AI assistants, the tool's architecture creates substantial attack vectors. The researchers demonstrated that a malicious skill called "What Would Elon Do?" exposed nine critical vulnerabilities in OpenClaw, including active data exfiltration, prompt injection attacks, and the ability to execute arbitrary shell commands with high-level privileges.

Cisco's AI Threat and Security Research team released an open-source Skill Scanner tool designed to identify vulnerabilities in AI agent skills, revealing that 26% of 31,000 analyzed agent skills contained at least one vulnerability. The core problem lies in OpenClaw's architecture: security is optional rather than built-in, and granting AI agents unlimited local access to data, files, and system commands creates significant risks if configurations are misconfigured or malicious skills are downloaded. The researchers emphasize that local execution alone does not guarantee safety, as compromised credentials and API keys can be leaked through prompt injection or unsecured endpoints.

  • Integration with messaging applications like WhatsApp and iMessage extends attack surfaces, allowing threat actors to craft malicious prompts that cause unintended behavior
  • Cisco released an open-source Skill Scanner tool to help identify vulnerabilities in agent skills before deployment

Editorial Opinion

OpenClaw represents an impressive technical achievement in personal AI assistants, but the security research reveals a cautionary tale about deploying powerful autonomous agents without security-first design principles. The 26% vulnerability rate in analyzed skills suggests the AI agent ecosystem is moving faster than security tooling can keep pace. While Cisco's Skill Scanner is a valuable contribution, the fundamental architecture of OpenClaw—where security is optional—may require substantial redesign before such tools can be safely deployed at scale.

AI AgentsCybersecurityAI Safety & AlignmentPrivacy & Data

More from Anthropic

AnthropicAnthropic
RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
AnthropicAnthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
AnthropicAnthropic
POLICY & REGULATION

Security Researcher Exposes Critical Infrastructure After Following Claude's Configuration Advice Without Authentication

2026-04-05

Comments

Suggested

AnthropicAnthropic
RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
OracleOracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
AnthropicAnthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
← Back to news
© 2026 BotBeat
AboutPrivacy PolicyTerms of ServiceContact Us