Critical Security Gap: AI Agents Given Unrestricted Root Access Across Popular Platforms
Key Takeaways
- All major AI platforms (Claude, ChatGPT, Cursor) use all-or-nothing permission models with no granular access control over agent tools
- A security scan of 1,808 MCP servers found that 66% had security findings, with 30 CVEs in 60 days and widespread malware among published skills
- AI agents are being granted system-level access comparable to root permissions, similar to pre-IAM cloud infrastructure
Summary
A security researcher at Aerostack has identified a critical gap in how AI agents are granted permissions across Model Context Protocol (MCP) servers and AI platforms. When users connect tools like PostgreSQL databases, GitHub repositories, and Slack workspaces to AI agents, they cannot restrict access granularly; instead they receive an all-or-nothing permission model that grants destructive capabilities such as DELETE, DROP TABLE, remove_repository, and delete_channel regardless of the intended use case. The researcher tested major AI platforms including Claude, Cursor, and ChatGPT and found the pattern to be universal.
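To make the failure mode concrete, here is a minimal sketch of what a connection under an all-or-nothing model hands to an agent. The tool names and the helper function are hypothetical illustrations, not taken from any specific MCP server:

```python
# Hypothetical tool list exposed by a typical MCP database/repo/chat
# connection. Under an all-or-nothing model, connecting for read access
# still grants every tool in the list.
EXPOSED_TOOLS = [
    "query", "list_tables",              # read-only: what the user wanted
    "delete", "drop_table",              # destructive: granted anyway
    "get_file", "remove_repository",
    "read_messages", "delete_channel",
]

DESTRUCTIVE = {"delete", "drop_table", "remove_repository", "delete_channel"}

def unwanted_grants(exposed, intended_read_only=True):
    """Return destructive tools the agent receives despite a read-only intent."""
    if not intended_read_only:
        return []
    return [t for t in exposed if t in DESTRUCTIVE]

print(unwanted_grants(EXPOSED_TOOLS))
# ['delete', 'drop_table', 'remove_repository', 'delete_channel']
```

The point of the sketch is that with no granular permission API, there is no place to drop these tools before the agent can call them.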
A scan of 1,808 MCP servers revealed alarming security metrics: 66% contained security findings, with 30 CVEs discovered in 60 days. The malware problem is particularly severe, with 76 published skills containing malicious code and 5 of the 7 most-downloaded skills identified as malware. The researcher compares the current landscape to early cloud infrastructure before identity and access management (IAM) systems were implemented, arguing the industry is repeating historical mistakes by deploying powerful automation without proper permission controls.
Aerostack has attempted to address this gap by building per-tool permission controls into their gateway, allowing individual toggles for each capability with destructive operations blocked by default and enforced at the proxy layer. However, the broader ecosystem remains vulnerable as long as platform-level granular permission models are absent.
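The gateway approach described above can be sketched roughly as follows. The class, policy format, and tool names are assumptions for illustration, not Aerostack's actual implementation:

```python
# Destructive operations that a proxy-layer policy blocks by default.
# Names are illustrative.
DESTRUCTIVE_DEFAULT_DENY = {"delete", "drop_table", "remove_repository", "delete_channel"}

class ToolGateway:
    """Proxy-layer filter: every tool gets an individual toggle, and
    destructive tools stay blocked until an operator explicitly opts in."""

    def __init__(self, overrides=None):
        # Per-tool toggles set by the operator, e.g. {"drop_table": True}.
        self.overrides = dict(overrides or {})

    def is_allowed(self, tool):
        if tool in self.overrides:
            return self.overrides[tool]
        # Default policy: reads pass, destructive operations are denied.
        return tool not in DESTRUCTIVE_DEFAULT_DENY

    def call(self, tool, handler, *args):
        # Enforcement happens here, at the proxy, not in the agent.
        if not self.is_allowed(tool):
            raise PermissionError(f"tool '{tool}' blocked by gateway policy")
        return handler(*args)

gw = ToolGateway()
assert gw.is_allowed("query")           # read tool passes by default
assert not gw.is_allowed("drop_table")  # destructive tool blocked by default
assert ToolGateway(overrides={"drop_table": True}).is_allowed("drop_table")
```

Enforcing the check in the proxy rather than in the agent matters: a prompt-injected or misbehaving agent never sees the blocked tools succeed, regardless of what it attempts.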
Editorial Opinion
This report exposes a fundamental architectural flaw in how AI agent capabilities are being deployed at scale. The comparison to pre-IAM cloud computing is apt and sobering: the industry appears to be repeating a well-documented security failure by prioritizing ease of use over permission granularity. Until major AI platforms implement proper access control frameworks, organizations deploying AI agents face unacceptable operational risk. Meanwhile, the ecosystem's reliance on community-built tools with high malware rates suggests we are still in an early, unsafe phase of agent deployment.

