VibeLens: Open-Source Tool for Visualizing and Auditing AI Agent Sessions
Key Takeaways
- VibeLens supports 11 AI coding agents (Claude, Cursor, Copilot, Gemini, and others), making it broadly compatible across the agent ecosystem
- Enables session replay and step-by-step analysis to understand exactly what AI agents are doing, including tool calls and thinking blocks
- Converts agent sessions into reusable skills through personalization, helping agents learn from real workflows
- Includes dashboard analytics for usage tracking, cost analysis, and per-project insights to optimize agent teams
- Open source, with a live demo available now and easy local installation via uv or pip
Summary
CHATS-Lab has launched VibeLens, an open-source tool designed to visualize, personalize, and audit sessions across multiple AI coding agents. The platform supports 11 different AI agents including Claude, Cursor, GitHub Copilot, Google Gemini, and others, enabling users to replay sessions step-by-step, analyze what agents are actually doing, and identify friction patterns in their workflows.
VibeLens offers three core capabilities: session visualization with detailed timelines showing messages, tool calls, and thinking blocks; productivity insights that detect where agents get stuck and provide concrete improvement suggestions; and personalization through skill generation, allowing users to turn real sessions into reusable skills that agents can load. The tool also includes a dashboard with usage heatmaps, cost breakdowns by model, and per-project analytics.
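To make the "session as timeline" idea concrete, the sketch below shows one plausible way to model a replayable session of messages, tool calls, and thinking blocks. It is purely illustrative: the class names, fields, and event kinds are assumptions for this example and are not VibeLens's actual schema or API.

```python
# Illustrative sketch only: a minimal, hypothetical model of an agent session
# timeline (messages, tool calls, thinking blocks). Not VibeLens's real schema.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SessionEvent:
    """A single step in an agent session replay."""
    timestamp: datetime
    kind: str                    # "message", "tool_call", or "thinking"
    content: str                 # message text, tool arguments, or thinking summary
    tool_name: str | None = None


@dataclass
class Session:
    """An ordered list of events that can be replayed step by step."""
    agent: str
    events: list[SessionEvent] = field(default_factory=list)

    def replay(self) -> None:
        """Print each event in order, roughly what a step-by-step replay shows."""
        for i, ev in enumerate(self.events, start=1):
            label = ev.tool_name or ev.kind
            print(f"[{i:03d}] {ev.timestamp:%H:%M:%S} {label}: {ev.content[:60]}")


if __name__ == "__main__":
    session = Session(agent="example-agent")
    session.events.append(SessionEvent(datetime.now(), "message", "Refactor the auth module"))
    session.events.append(SessionEvent(datetime.now(), "tool_call", '{"path": "auth.py"}', tool_name="read_file"))
    session.events.append(SessionEvent(datetime.now(), "thinking", "The module mixes I/O and validation..."))
    session.replay()
```

A structure like this also hints at how friction detection and skill generation could work: repeated tool-call failures stand out in the event stream, and a successful sequence of events can be exported as a reusable recipe.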
The tool is available immediately through a live demo that requires no installation, and can be installed locally with pip or uv (Python package managers). It auto-detects agent session formats and integrates with existing agent environments, addressing a growing need for observability and optimization in the expanding AI agent ecosystem.
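The "auto-detects agent session formats" claim presumably comes down to inspecting the structure of each log file. The fragment below is a hedged guess at what such detection could look like; the key names and heuristics are invented for illustration and are not taken from VibeLens.

```python
# Hypothetical format sniffing: guess which agent produced a session log by
# looking at its structure. Key names here are invented for illustration.
import json
from pathlib import Path


def guess_agent(path: Path) -> str:
    """Return a best-effort label for the agent that wrote this session file."""
    try:
        data = json.loads(path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        return "unknown"

    if isinstance(data, dict):
        # Purely illustrative heuristics; a real detector would check documented schemas.
        if "tool_calls" in data and "model" in data:
            return "openai-style"
        if "thinking" in data:
            return "thinking-block-style"
    return "unknown"


if __name__ == "__main__":
    for log in Path(".").glob("*.json"):
        print(f"{log.name}: {guess_agent(log)}")
```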
Editorial Opinion
VibeLens addresses a critical gap in the AI agent development space: visibility into what agents are actually doing. As AI agents become more complex and teams deploy multiple agents in parallel, the ability to replay sessions, spot repeated errors, and harvest learnings into reusable skills becomes invaluable. This tool democratizes agent observability and optimization, which is essential for moving beyond 'black box' agent behavior to trustworthy, auditable automation.