BotBeat

Anthropic · PRODUCT LAUNCH · 2026-05-04

Anthropic Launches Trusted Remote Execution (Rex): Open-Source Policy-Enforced Scripts for AI Agents

Key Takeaways

  • Trusted Remote Execution (Rex) separates script intent from authorization policy, preventing scripts from executing beyond their granted permissions
  • Designed specifically for AI agents: instead of constraining the agent itself, Rex constrains what the agent can do to the host, giving service owners full control
  • Uses the Cedar policy language paired with Rhai scripts; every operation (read, write, open, etc.) is evaluated against policy before execution (see the policy sketch below)
Source: Hacker News, https://aws.amazon.com/blogs/opensource/introducing-trusted-remote-execution-policy-enforced-scripts-for-ai-agents-and-humans/
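
What such a policy might look like: the sketch below is a minimal Cedar rule under assumed entity and action names (Agent, Action::"read", File), not Rex's published schema. It illustrates the key property the takeaways describe: Cedar is default-deny, so a script can only perform operations a permit rule explicitly covers.

    // Hypothetical Cedar policy granting an agent read access to a single log file.
    // Entity types and action names here are illustrative assumptions, not Rex's actual schema.
    permit (
      principal == Agent::"ops-assistant",
      action == Action::"read",
      resource == File::"/var/log/app.log"
    );
    // Cedar is default-deny: any operation not matched by a permit rule is refused.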

Summary

Anthropic has released Trusted Remote Execution (Rex), an open-source scripting runtime that enforces an authorization policy on every system operation. Scripts are written in the lightweight Rhai language and have no direct system access; Rex evaluates each operation they attempt against Cedar policy files that define what is allowed on the host, so nothing runs unless policy explicitly approves it. This addresses a critical safety challenge: when AI agents autonomously generate and execute scripts, traditional code review and approval workflows don't apply. With Rex, an agent-generated script that attempts an operation beyond the policy receives an ACCESS_DENIED_EXCEPTION rather than causing an unintended side effect, allowing the agent to observe the denial, reason about it, and adjust. The runtime is available for Linux and macOS under the Apache 2.0 license, enabling organizations to give AI agents real operational access, such as reading logs, inspecting configurations, and restarting services, while maintaining hard boundaries around what they can touch.
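
To make the denial behavior concrete, here is a minimal Rhai sketch of that pattern. The host function read_file and the exact error value are assumptions for illustration; Rex's actual runtime API may differ. What matters is that a call outside policy surfaces as a catchable error the agent can observe, rather than touching the host.

    // Hypothetical agent-generated Rhai script running under a Rex-style runtime.
    // `read_file` stands in for whatever host function the runtime exposes;
    // policy is consulted before the operation ever reaches the filesystem.
    let targets = ["/var/log/app.log", "/etc/shadow"];

    for path in targets {
        try {
            let contents = read_file(path);   // succeeds only if policy permits a read of this path
            print(`read ${contents.len()} bytes from ${path}`);
        } catch (err) {
            // An out-of-policy call is rejected (e.g. ACCESS_DENIED_EXCEPTION)
            // instead of producing an unintended side effect.
            print(`denied: ${path} (${err})`);
        }
    }

In that flow, the agent can read the denial, adjust its plan, and retry with operations the policy actually allows.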

  • Open source under the Apache 2.0 license and available for Linux and macOS via Cargo

Editorial Opinion

Rex represents a thoughtful architectural approach to AI agent safety that sidesteps the limitations of traditional sandboxes. Rather than restricting what agents can request, it enforces what hosts will permit, a powerful inversion that gives operational teams explicit control. For organizations looking to grant AI agents practical system access while maintaining safety guarantees, this policy-based authorization model offers a compelling alternative to overly restrictive sandboxes or dangerously unrestricted execution.

AI Agents · MLOps & Infrastructure · AI Safety & Alignment · Open Source

