Prompt Guard: Open-Source MITM Proxy Blocks Sensitive Data From Reaching AI APIs
Key Takeaways
- Prompt Guard is an open-source MITM proxy that intercepts and inspects prompts sent to AI APIs in real time, detecting sensitive data such as API keys, passwords, and PII before transmission
- The tool operates in inspect-only mode by default, alerting users to credential leaks without blocking API calls, providing visibility without disrupting workflows
- A lightweight, zero-dependency design ships as a single binary with local CA certificate generation and a built-in web dashboard for audit logging and monitoring
Summary
Prompt Guard is a lightweight, open-source HTTPS man-in-the-middle proxy that intercepts prompts sent to AI coding assistants and large language model APIs before they reach third-party servers. The tool sits between applications such as VS Code with GitHub Copilot, ChatGPT, and Claude, scanning every prompt in real time for sensitive data such as API keys, passwords, SSNs, private keys, and credit card numbers. By running locally on a developer's machine, Prompt Guard provides visibility into what data is being transmitted to AI services, a growing concern as developers increasingly rely on AI tools that automatically include editor context in their requests.
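Routing an application through a local intercepting proxy of this kind is typically done with the standard proxy environment variables. The sketch below is illustrative only: the port, certificate path, and binary name are assumptions, not documented Prompt Guard settings.

```shell
# Point HTTP(S) traffic at a locally running interceptor
# (127.0.0.1:8080 is an assumed listen address, not a documented default).
export HTTPS_PROXY=http://127.0.0.1:8080
export HTTP_PROXY=http://127.0.0.1:8080

# Tools must also trust the proxy's locally generated CA certificate.
# For Node-based tools (e.g. VS Code extensions), one common mechanism is:
export NODE_EXTRA_CA_CERTS="$HOME/.promptguard/ca.pem"  # path is illustrative
```

IDE-specific proxy settings (e.g. VS Code's `http.proxy`) achieve the same effect for applications that ignore the environment variables.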
The proxy operates as a transparent HTTPS interceptor using a locally generated certificate authority, requiring only environment variable configuration or IDE settings to activate. It includes 12 built-in detection rules for common credential patterns (AWS keys, OpenAI/Anthropic API tokens, GitHub tokens, PEM private keys, and PII), with severity levels ranging from High to Low. Notably, Prompt Guard operates in inspect-only mode by default, flagging sensitive data without blocking prompts, so users see alerts while normal API communication continues. The tool features a web dashboard for monitoring flagged prompts in real time, SQLite persistence for audit logging, and ships as a single binary with no runtime dependencies, making it easy to deploy across macOS, Linux, and Windows.
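The rule-based detection described above can be sketched as a small set of named regular expressions scanned against each intercepted prompt. The rule names, severities, and patterns below are illustrative, not Prompt Guard's actual rule set:

```go
package main

import (
	"fmt"
	"regexp"
)

// Rule pairs a human-readable name and severity with a compiled pattern.
// These four rules are illustrative examples, not Prompt Guard's built-ins.
type Rule struct {
	Name     string
	Severity string
	Pattern  *regexp.Regexp
}

var rules = []Rule{
	{"AWS Access Key", "High", regexp.MustCompile(`AKIA[0-9A-Z]{16}`)},
	{"OpenAI API Key", "High", regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`)},
	{"PEM Private Key", "High", regexp.MustCompile(`-----BEGIN [A-Z ]*PRIVATE KEY-----`)},
	{"US SSN", "Medium", regexp.MustCompile(`\b\d{3}-\d{2}-\d{4}\b`)},
}

// Scan returns a finding for every rule that matches the prompt body.
// In inspect-only mode the caller would log these and forward the
// request unchanged; a blocking mode would reject on any High hit.
func Scan(prompt string) []string {
	var hits []string
	for _, r := range rules {
		if r.Pattern.MatchString(prompt) {
			hits = append(hits, fmt.Sprintf("%s (%s)", r.Name, r.Severity))
		}
	}
	return hits
}

func main() {
	fmt.Println(Scan("please debug: key=AKIAABCDEFGHIJKLMNOP"))
}
```

Because scanning is pure string matching over the decrypted request body, it adds negligible latency and needs no network access of its own, which is consistent with the tool's local-first, zero-dependency design.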
Editorial Opinion
Prompt Guard addresses a genuine and often-overlooked security vulnerability in the AI development workflow: the automatic transmission of editor context to third-party AI services. As developers increasingly rely on AI coding assistants like Copilot and ChatGPT, the unintended leakage of credentials and sensitive data has become a significant risk. This open-source solution provides a practical, zero-friction way to gain visibility into what's being sent to AI APIs without disrupting productivity. The inspect-only default reflects a deliberate design philosophy: give developers agency and transparency rather than making security decisions for them. For enterprises concerned about credential exposure to AI vendors, this tool represents an elegant local-first approach to the problem.
