OpenMaze Launches rtk, Open-Source CLI Proxy That Cuts AI Agent Token Usage by 90%
Key Takeaways
- rtk reduces AI coding agent token consumption by an average of 89%, with some commands like cargo test seeing a 99% reduction
- The open-source tool extends agent session length 3x by preventing context-window pollution from verbose CLI outputs
- OpenMaze estimates teams of 10 developers waste ~$1,750 monthly on unnecessary token costs from unfiltered command outputs
Summary
OpenMaze has released rtk, an open-source command-line interface proxy designed to dramatically reduce token consumption for AI coding agents. The tool compresses CLI outputs before they reach an agent's context window, removing an average of 89% of unnecessary noise from command outputs. Built in Rust and released under the MIT license, rtk has already garnered over 450 stars on GitHub.
The tool addresses a critical inefficiency in AI-assisted development: every command execution floods the agent's context window with boilerplate and irrelevant output. For example, a typical 'cargo test' run that normally generates ~4,823 tokens is compressed to just 11 tokens by rtk, a 99% reduction. According to OpenMaze's internal metrics, rtk has processed 2,927 commands, saving 10.3 million of 11.6 million input tokens. The company estimates that teams of 10 developers waste approximately $1,750 monthly on token costs from CLI noise alone.
rtk works by intelligently filtering and condensing the output of over 30 common development commands, including git, cargo, pytest, grep, and find. The tool reportedly extends AI agent sessions 3x by preventing premature context-window overflow. For teams using pay-per-token APIs such as OpenAI's or Anthropic's, the savings translate directly into reduced costs, while flat-rate plan users benefit from hitting rate limits 40% slower. The compression preserves critical information, such as test results, errors, and warnings, while eliminating repetitive boilerplate that contributes nothing to the agent's reasoning.
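rtk's actual compression rules are not described in detail, but the general idea of keeping only signal-bearing lines can be sketched with a toy filter. Everything below (the function name, the regex, and the sample output) is an illustrative assumption, not rtk's implementation:

```python
import re

def compress_test_output(raw: str) -> str:
    """Toy sketch: keep only failures, errors, warnings, and the
    final summary line; drop the per-test noise an agent rarely needs."""
    kept = []
    for line in raw.splitlines():
        # Preserve lines that carry actionable signal.
        if re.search(r"\b(FAILED|error|warning)\b", line) or line.startswith("test result:"):
            kept.append(line.strip())
    return "\n".join(kept)

# Hypothetical cargo-test-style output for demonstration.
raw = """\
   Compiling demo v0.1.0
running 3 tests
test parse::ok ... ok
test parse::empty ... ok
test parse::bad_input ... FAILED
test result: FAILED. 2 passed; 1 failed; 0 ignored
"""
print(compress_test_output(raw))
```

A filter like this turns six lines into two while keeping the one failure an agent would need to act on; rtk presumably applies far more sophisticated, per-command rules to reach its reported 99% reduction on cargo test.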
Editorial Opinion
rtk addresses a real infrastructure problem in AI-assisted development that most teams don't realize they have until they see the bill. The 89% average token savings aren't just about cost—they fundamentally change how long agents can maintain context and reasoning quality. However, the tool's true test will be whether its compression algorithms consistently preserve the semantic information agents actually need, or if the aggressive filtering occasionally strips away critical context that leads to incorrect suggestions.



