Prempti: Open-Source Guardrails and Observability Tool for AI Coding Agents
Key Takeaways
- Prempti provides the first structured runtime security and observability layer for AI coding agents, filling a critical gap in agent transparency
- The tool adapts Falco's proven rule engine to the coding-agent context, with fields such as tool.name, tool.input_command, tool.file_path, and agent.cwd
- Monitor mode enables safe deployment and policy tuning before enforcement is turned on, helping teams understand actual agent behavior before setting restrictions
Summary
Sysdig has introduced Prempti, an experimental project that brings runtime security and observability to AI coding agents like Claude Code. The tool uses Falco's rule engine to intercept and evaluate tool calls (file reads/writes, shell commands, network requests) before they execute, allowing developers to monitor and enforce security policies on agent behavior. Prempti runs as a lightweight user-space service without requiring root access, kernel modules, or containers, and operates in two modes: Monitor (observation only) and Guardrails (enforcement with deny/ask/allow verdicts).
The key innovation is real-time visibility into what coding agents actually do on a machine. Agents typically operate with user-level permissions and access to credentials, SSH keys, and other sensitive resources, yet most developers have no structured visibility into their runtime actions. Prempti addresses this by intercepting tool calls at the user-space level and evaluating them against Falco rules before execution, enabling detection and prevention of unauthorized or dangerous actions. Developers can use familiar Falco rule syntax to define security policies specific to their coding agent workflows, with LLM-friendly output messages allowing agents to receive structured feedback about blocked or restricted actions.
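Because policies use familiar Falco syntax with the agent-specific fields named above, a rule might look something like the following sketch. This is illustrative only: the field names (tool.name, tool.input_command, agent.cwd) come from the article, but the exact rule schema and how a rule maps to a deny/ask/allow verdict are assumptions, not documented Prempti syntax.

```yaml
# Illustrative Falco-style rule (schema assumed, not official Prempti syntax):
# deny any agent-issued shell command that touches SSH private keys.
- rule: Block SSH Key Access by Coding Agent
  desc: Deny shell commands from the agent that reference ~/.ssh private keys
  condition: tool.name = "bash" and tool.input_command contains ".ssh/id_"
  output: >
    Blocked: this command attempted to access SSH private keys
    (command=%tool.input_command cwd=%agent.cwd)
  priority: CRITICAL
```

Note that the output message is written in plain language on purpose: per the article's LLM-friendly output design, the agent receives this text as structured feedback and can explain the denial or adjust its plan rather than silently failing.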
Editorial Opinion
Prempti addresses a critical blind spot in the growing adoption of AI coding agents. As developers increasingly trust agents with terminal access and file system permissions, runtime visibility and enforcement have moved from nice-to-have to essential. The choice to leverage Falco—a battle-tested runtime security engine with a mature rule ecosystem—is smart, and the LLM-aware output design shows thoughtful engineering for the agent era. This project signals an important shift: AI agent governance must move beyond sandboxing discussions into practical, transparent policy enforcement that developers can actually understand and tune.



