BotBeat
POLICY & REGULATION · Anthropic · 2026-03-26

Critical Security Gap: AI Agents Given Unrestricted Root Access Across Popular Platforms

Key Takeaways

  • All major AI platforms (Claude, ChatGPT, Cursor) use all-or-nothing permission models with no granular access control for agent tools
  • A security scan of 1,808 MCP servers found 66% had security vulnerabilities, with 30 CVEs in 60 days and widespread malware in published skills
  • AI agents are being granted system-level access comparable to root permissions, similar to pre-IAM cloud infrastructure
Source: Hacker News — https://news.ycombinator.com/item?id=47530428

Summary

A security researcher at Aerostack has identified a critical vulnerability in how AI agents are granted permissions across Model Context Protocol (MCP) servers and AI platforms. When connecting tools like PostgreSQL databases, GitHub repositories, and Slack workspaces to AI agents, users cannot granularly restrict access; instead they receive an all-or-nothing permission model that grants destructive capabilities like DELETE, DROP TABLE, remove_repository, and delete_channel regardless of intended use case. The researcher tested major AI platforms including Claude, Cursor, and ChatGPT and found this pattern is universal.
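To make the all-or-nothing problem concrete, here is a minimal Python sketch. The tool names are illustrative assumptions (not from any real MCP server), and the pattern-based classifier is a hypothetical helper; the point is that a connected server advertises its full tool list, destructive tools included, with no way for the client to subscribe to only the read-only subset.

```python
# Substrings that typically mark destructive operations (illustrative, not
# an exhaustive or official taxonomy).
DESTRUCTIVE_PATTERNS = ("delete", "drop", "remove")


def classify_tools(advertised_tools):
    """Split an advertised MCP tool list into read-only and destructive buckets."""
    destructive = [t for t in advertised_tools
                   if any(p in t.lower() for p in DESTRUCTIVE_PATTERNS)]
    readonly = [t for t in advertised_tools if t not in destructive]
    return readonly, destructive


# A Postgres-style MCP server advertises all of these at once; under the
# all-or-nothing model the client cannot opt in to just the first two.
tools = ["query", "list_tables", "delete_rows", "drop_table"]
readonly, destructive = classify_tools(tools)
print(readonly)      # ['query', 'list_tables']
print(destructive)   # ['delete_rows', 'drop_table']
```

A classifier like this is exactly the kind of filtering the article argues platforms do not expose: the split is computable, but no platform-level toggle acts on it.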

A scan of 1,808 MCP servers revealed alarming security metrics: 66% contained security findings, with 30 CVEs discovered in 60 days. The malware problem is particularly severe, with 76 published skills containing malicious code and 5 of the 7 most-downloaded skills identified as malware. The researcher compares the current landscape to early cloud infrastructure before identity and access management (IAM) systems were implemented, arguing the industry is repeating historical mistakes by deploying powerful automation without proper permission controls.

Aerostack has attempted to address this gap by building per-tool permission controls into their gateway, allowing individual toggles for each capability with destructive operations blocked by default and enforced at the proxy layer. However, the broader ecosystem remains vulnerable as long as platform-level granular permission models are absent.

  • Current architecture allows agents to execute destructive operations (DELETE, DROP, remove_user, delete_channel) even when only read access was intended
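The proxy-layer enforcement described above can be sketched as a default-deny gateway. This is a hypothetical illustration in the spirit of the per-tool toggles the article attributes to Aerostack's gateway; the class and method names are assumptions, not Aerostack's actual implementation.

```python
# Hypothetical permission gateway sitting between an AI agent and MCP servers.
# Destructive tools are blocked by default and require an explicit per-tool
# opt-in; read-style tools pass unless explicitly disabled.
DESTRUCTIVE_PATTERNS = ("delete", "drop", "remove")


class PermissionGateway:
    def __init__(self):
        # Per-tool toggles set by the operator.
        self.enabled = {}

    def allow(self, tool):
        """Explicitly enable a single tool (the per-tool toggle)."""
        self.enabled[tool] = True

    def is_destructive(self, tool):
        return any(p in tool.lower() for p in DESTRUCTIVE_PATTERNS)

    def check(self, tool):
        """Return True if a tool call may proceed through the proxy."""
        if self.is_destructive(tool):
            # Default-deny: destructive ops need an explicit opt-in.
            return self.enabled.get(tool, False)
        # Non-destructive tools are allowed unless toggled off.
        return self.enabled.get(tool, True)


gw = PermissionGateway()
print(gw.check("query"))       # True  — read tool passes by default
print(gw.check("drop_table"))  # False — destructive blocked by default
gw.allow("drop_table")         # operator flips the per-tool toggle
print(gw.check("drop_table"))  # True
```

Enforcing this at the proxy rather than in the agent matters: the agent never sees a path to the destructive call unless the operator has opted in, so a prompt-injected or misbehaving agent cannot simply ignore the policy.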

Editorial Opinion

This report exposes a fundamental architectural flaw in how AI agent capabilities are being deployed at scale. The comparison to pre-IAM cloud computing is apt and sobering—the industry appears to be repeating a well-documented security failure by prioritizing ease-of-use over permission granularity. Until major AI platforms implement proper access control frameworks, organizations deploying AI agents face unacceptable operational risk, and the ecosystem's reliance on community-built tools with high malware rates suggests we're in an early, unsafe phase of agent deployment.

AI Agents · Cybersecurity · Ethics & Bias · AI Safety & Alignment

© 2026 BotBeat