BotBeat

Anthropic · RESEARCH · 2026-04-15

Claude Code's Permission System Relies on Deterministic Code, Not AI—Here's Why

Key Takeaways

  • Anthropic deliberately excluded the LLM from critical permission decisions, instead implementing a multi-layered deterministic rule system with 23 independent security validators for bash alone.
  • The bash tool's permission check includes a 6-stage pipeline that pre-computes four different command representations to catch evasion attempts like quote stripping, command substitution, and unicode whitespace injection.
  • Certain permission checks are "bypass-immune" and hardcoded to always prompt users—writes to .git/, .claude/, .vscode/, and shell config files cannot be overridden by any mode or setting.
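The multi-representation idea in the takeaways above can be illustrated with a short sketch. This is a hypothetical reconstruction, not Claude Code's actual source: the function names, the specific normalizations, and the protected-path list are assumptions made for illustration.

```python
import re
import unicodedata

# Bypass-immune path prefixes: touching these always prompts the user
# (illustrative list, mirroring the paths named in the takeaways).
ALWAYS_ASK_PREFIXES = (".git/", ".claude/", ".vscode/")

def representations(command: str) -> list[str]:
    """Pre-compute several normalized views of a command so that a rule
    matching any one of them catches common evasion attempts."""
    raw = command
    # 1. Strip quotes: r"m" -rf should still look like rm -rf.
    unquoted = command.replace('"', "").replace("'", "")
    # 2. Collapse unicode space characters (e.g. non-breaking space) to ASCII.
    ws_normalized = "".join(
        " " if unicodedata.category(ch) == "Zs" else ch for ch in unquoted
    )
    # 3. Surface the contents of $(...) command substitutions.
    substituted = re.sub(r"\$\((.*?)\)", r"\1", ws_normalized)
    return [raw, unquoted, ws_normalized, substituted]

def decide(command: str, deny_patterns: list[str]) -> str:
    """Deterministic verdict — 'deny', 'ask', or 'allow'. No model involved."""
    views = representations(command)
    for pattern in deny_patterns:
        if any(re.search(pattern, v) for v in views):
            return "deny"
    # Protected paths are bypass-immune: always escalate to the user.
    if any(p in v for p in ALWAYS_ASK_PREFIXES for v in views):
        return "ask"
    return "allow"
```

A quoted command like `r"m" -rf /` matches a deny rule for `rm -rf` only because the unquoted representation is checked alongside the raw string; that is the point of pre-computing multiple views.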
Source: Hacker News — https://blog.raed.dev/posts/claude_code_permissions/

Summary

A technical analysis of Claude Code's source code reveals a striking architectural choice: while the system delegates most tasks to LLM calls—including tool selection, code generation, and context management—permission decisions are handled entirely through deterministic, rule-based code. The permissions pipeline uses glob pattern matching, regex validators, hardcoded path checks, and multi-stage security validators, with no model inference involved in the core approval logic. This design reflects Anthropic's apparent unwillingness to rely on the LLM itself for critical security decisions. The only exception is an "auto mode" feature that uses an LLM classifier as a fallback when the deterministic pipeline reaches an "ask" state, but this feature is heavily gated, fails closed on errors, and was initially kept internal.

  • The LLM participates in permissions only through an optional "auto mode" classifier that serves as a fallback after deterministic checks fail, with multiple fail-safe mechanisms including error-triggered denials and fallback to human prompting.
  • The architectural split reflects a trust boundary: Anthropic trusts Claude with generation and reasoning tasks but designed the permission system explicitly to not depend on the model's judgment.
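The fail-closed fallback described above can be sketched as follows. This is a minimal illustration of the pattern, not Anthropic's code: `classify_with_llm` stands in for a real model call, and all names here are assumptions.

```python
from typing import Callable

def resolve(command: str,
            deterministic_check: Callable[[str], str],
            classify_with_llm: Callable[[str], str],
            auto_mode: bool) -> str:
    """Return the final action: 'allow', 'deny', or 'prompt_user'.

    The LLM is consulted only when the deterministic pipeline says 'ask'
    AND auto mode is enabled; any classifier error falls back to a human
    prompt rather than an approval (fail-closed)."""
    verdict = deterministic_check(command)
    if verdict in ("allow", "deny"):
        return verdict                 # deterministic rules are authoritative
    if not auto_mode:
        return "prompt_user"           # default path: ask the human
    try:
        llm_verdict = classify_with_llm(command)
    except Exception:
        return "prompt_user"           # model errors never grant access
    # Only an explicit 'allow' is honored; anything unexpected goes
    # back to the human.
    return "allow" if llm_verdict == "allow" else "prompt_user"
```

Note the trust boundary: the classifier can only convert an "ask" into an "allow" or hand the decision back to the user; it can never override a deterministic deny.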

Editorial Opinion

Claude Code's permission architecture reveals a pragmatic security philosophy: even when building a product around a powerful LLM, some decisions are too critical to delegate to the model itself. The multi-layered deterministic approach with fail-closed defaults is robust, but it also highlights the ongoing tension in AI systems between capability and safety. Whether this level of deterministic gatekeeping will prove sufficient as autonomous AI agents become more sophisticated remains an open question.

Generative AI · AI Agents · AI Safety & Alignment

© 2026 BotBeat