Inside Uber's Internal AI Stack: How the Rideshare Giant Built Agentic Tools to Scale Developer Productivity
Key Takeaways
- Uber's internal AI adoption has reached 92% of developers monthly, with AI authoring 31% of code—demonstrating AI adoption at scale in a large enterprise
- The company built a four-layer agentic AI stack spanning an internal platform, context sources, industry tools, and specialized agents for testing and code review
- AI-related costs have ballooned 6x since 2024, creating new operational challenges around token optimization and resource management
Summary
Uber has built an extensive internal AI stack designed to augment its nearly 3,000-person engineering organization, featuring agentic tools like Minion (background agent platform), Shepherd (migration management), uReview (code review), and others. The company has achieved remarkable adoption metrics, with 92% of developers using AI agents monthly and 31% of code being AI-authored, alongside 11% of pull requests opened by agents. However, the rapid expansion has surfaced new challenges: AI-related costs have increased 6x since 2024, making token cost optimization a growing priority, and adoption has proven slower than expected, with top-down initiatives proving less effective than peer-driven adoption.
Uber's approach reflects a broader shift in developer workflows—moving away from single-threaded coding in traditional IDEs toward orchestrating multiple parallel AI agents. The company developed several supporting tools, including MCP Gateway, Uber Agent Builder, and AIFX CLI, to streamline agent usage and improve agent effectiveness. Internal tools like Code Inbox, Autocover (which generates 5,000+ unit tests monthly), and Shepherd manage the downstream consequences of increased AI-authored code, including code review bottlenecks and large-scale migration complexity.
- Adoption has been slower than expected even at forward-thinking companies like Uber; peer-driven sharing of wins proves more effective than top-down mandates
- New supporting tools like Minion, uReview, and Shepherd were necessary to manage increased code review load and complexity from AI-generated code
Editorial Opinion
Uber's transparent disclosure of its AI-powered developer tools offers a rare behind-the-scenes look at how large-scale AI integration actually works in practice. What's most striking isn't the 92% adoption rate, but the honest acknowledgment of challenges: 6x cost increases and slower-than-expected adoption suggest the "AI-powered company" narrative is more complex than the headlines imply. The shift from individual developer productivity to managing parallel agentic systems represents a genuine paradigm change that will reshape engineering culture for years to come.



