ClickHouse Shares Real-World Experience with AI Coding Agents, Highlighting Both Promise and Limitations
Key Takeaways
- Claude Code and AI agents excel at boilerplate, internal tools, and small applications, but still struggle with large-scale C++ backend development
- Claude Sonnet 4.5 (September 2025) marked a significant quality jump, enabling more sophisticated internal tool development in production environments
- Enterprise-grade AI coding adoption requires contracts, security vetting, observability infrastructure, and custom agent development: a multi-tool, multi-vendor approach
Summary
ClickHouse, a major open-source database platform, has shared an in-depth case study on its practical experience deploying AI coding agents, including Claude Code from Anthropic, across the organization. The company has successfully integrated agents for boilerplate tasks, internal tooling, performance testing, and dashboards, citing Claude Sonnet 4.5 (September 2025) as a watershed moment in capability. A concrete example: the Team Productivity Dashboard was built in a single session using 112 prompts. However, ClickHouse remains cautiously skeptical about agents' effectiveness on complex backend C++ codebases, reflecting the broader industry tension between AI enthusiasm and practical limitations.
The article cuts through polarized discourse about AI-assisted coding by acknowledging legitimate use cases alongside real constraints. ClickHouse has signed contracts with multiple vendors (Anthropic, Windsurf, and Cursor), built custom agents (DWAINE, CAISER, TRAISA), and adopted observability tools (LibreChat, Langfuse) to operationalize AI coding at scale. The author emphasizes that coding agents are tools suited for specific scenarios, not universal solutions, and that effectiveness depends heavily on codebase complexity and language choice.
The bottom line: coding agents are incremental force multipliers for specific workflows, not job replacements, and skepticism about their universal applicability is justified.
Editorial Opinion
ClickHouse's candid assessment is exactly what the industry needs: a grounded, non-ideological view of where AI coding agents actually deliver value. Rather than offering breathless hype or dismissive skepticism, the company demonstrates that agents are specialized tools: invaluable for clearing boilerplate drudgery, genuinely useful for internal tooling, but still fallible on deeply complex systems. Its willingness to invest while remaining skeptical about C++ codebases sets a healthy example for engineers navigating the real tradeoffs of AI-assisted development.