One Agent SDK Launches Unified Interface for Claude Code, ChatGPT Codex, and Kimi-CLI
Key Takeaways
- One Agent SDK provides a unified TypeScript interface for building coding agents that work across Claude Code, ChatGPT Codex, and Kimi-CLI without code changes
- The SDK standardizes streaming output, tool definitions, and multi-agent handoffs across different LLM providers, eliminating the need to learn multiple APIs
- Unlike traditional API-based approaches, the SDK runs agents locally by spawning coding-agent CLIs as subprocesses, requiring no API keys and reducing latency
Summary
Developer odysa has released One Agent SDK, an open-source TypeScript library that provides a unified, provider-agnostic interface for building LLM-powered coding agents. The SDK addresses a common pain point for developers: each major AI provider—Anthropic's Claude Code, OpenAI's ChatGPT Codex, and Moonshot AI's Kimi-CLI—requires its own distinct SDK, streaming format, and tool-calling API. One Agent SDK abstracts these differences, allowing developers to write agent code once and run it across multiple backends by simply changing a configuration parameter.
The library introduces several key features designed to simplify multi-provider agent development. It offers a consistent `AsyncGenerator<StreamChunk>` streaming interface across all providers, a unified tool definition format using Zod for type safety, and built-in support for multi-agent handoffs and orchestration. Notably, the SDK operates without requiring API keys: it spawns coding-agent CLIs as subprocesses and manages them from within the host process, rather than routing requests through hosted APIs. This approach reduces latency and simplifies deployment for developers working with command-line based AI coding assistants.
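The streaming pattern described above can be sketched as follows. This is a hypothetical illustration of the provider-agnostic `AsyncGenerator<StreamChunk>` idea, not the SDK's actual API: the `StreamChunk` shape, `runAgent`, `collect`, and the provider identifiers are all assumptions made for the example.

```typescript
// Illustrative sketch: every provider adapter yields the same chunk shape,
// so consumer code never branches on the backend.
type StreamChunk =
  | { type: "text"; content: string }
  | { type: "tool_call"; name: string; args: unknown }
  | { type: "done" };

// A real adapter would spawn the provider's CLI and translate its output
// stream; here we fake a short response to show the unified interface.
async function* runAgent(
  provider: "claude-code" | "codex" | "kimi-cli",
  prompt: string
): AsyncGenerator<StreamChunk> {
  yield { type: "text", content: `[${provider}] responding to: ${prompt}` };
  yield { type: "done" };
}

// Consumer code is identical regardless of which backend is selected;
// switching providers means changing one argument.
async function collect(
  provider: "claude-code" | "codex" | "kimi-cli"
): Promise<string> {
  const parts: string[] = [];
  for await (const chunk of runAgent(provider, "fix the build")) {
    if (chunk.type === "text") parts.push(chunk.content);
  }
  return parts.join("");
}
```

The discriminated-union chunk type is what lets one `for await` loop handle text, tool calls, and completion events uniformly across providers.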
One Agent SDK follows a modular architecture where provider-specific dependencies are optional peer dependencies, meaning developers only install the SDKs they actually need. The project is currently available on GitHub and includes documentation, examples, and configuration files for immediate use. By standardizing the interface for interacting with different AI coding agents, the SDK aims to reduce development time and maintenance burden for teams building agent-based applications that may need to support multiple LLM providers or switch between them based on performance, cost, or availability considerations.
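The subprocess approach mentioned above can be sketched with Node's standard `child_process` module. This is a minimal illustration of the general technique, not the SDK's implementation: the `streamCli` helper is hypothetical, and `echo` stands in for a real agent binary such as a provider's CLI.

```typescript
import { spawn } from "node:child_process";

// Spawn a CLI and resolve with its accumulated stdout. A real adapter would
// instead translate each streamed line into a unified chunk format as it
// arrives, rather than buffering the whole output.
function streamCli(command: string, args: string[]): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args);
    let output = "";
    child.stdout.on("data", (buf: Buffer) => {
      output += buf.toString();
    });
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0
        ? resolve(output.trim())
        : reject(new Error(`CLI exited with code ${code}`))
    );
  });
}
```

Because the agent runs as a local child process, there is no API key to manage and no network round-trip per request, which is the latency and deployment advantage the article attributes to this design.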
In practice, developers can switch between AI providers by changing a single configuration parameter while keeping all agent logic and tool definitions identical.
Editorial Opinion
One Agent SDK addresses a genuine friction point in the rapidly fragmenting AI tooling ecosystem—the cognitive overhead of maintaining separate codebases for different providers. As more companies adopt multi-vendor AI strategies for redundancy and cost optimization, abstraction layers like this become increasingly valuable. However, the SDK's reliance on CLI-based agents rather than direct API integration may limit its applicability for production scenarios requiring fine-grained control, authentication, or cloud-native deployment patterns. The project's success will likely depend on how well it keeps pace with the rapid evolution of underlying provider SDKs and whether the abstraction layer introduces meaningful performance or capability tradeoffs.