Zed Launches Zeta2, Rebuilt Edit Prediction Model with 30% Better Acceptance Rate
Key Takeaways
- Zeta2 achieves a 30% better acceptance rate than Zeta1, driven by scaling training data from 500 hand-curated examples to nearly 100,000
- The model now uses LSP-based context retrieval to resolve type definitions and symbols across the codebase, rather than guessing from local context alone
- Zeta2 is open-weight and available on Hugging Face, trained only on data from users who explicitly opted in, emphasizing transparency and user consent
Summary
Zed has unveiled Zeta2, a significantly improved version of its edit prediction model. It delivers a 30% better acceptance rate than its predecessor, Zeta1, and is now the default model for all Zed users. While the core architecture is unchanged, the team rebuilt everything around it: context building, training methodology, evaluation processes, and feedback incorporation. The largest gains come from a revamped training pipeline that scales from Zeta1's 500 hand-curated examples to nearly 100,000 examples collected, on an opt-in basis, from Zed users working in open-source-licensed repositories. The model also benefits from LSP-based context retrieval, which gives it access to type definitions and symbol information across the codebase rather than only the local context around the cursor.
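To make the LSP-based retrieval idea concrete, here is a minimal sketch of how an editor could ask a language server for a symbol's definition and splice the result into a prediction prompt. The JSON-RPC request shape follows the LSP specification (`textDocument/definition`); the function names and prompt format are hypothetical illustrations, not Zed's actual implementation.

```python
import json

def definition_request(uri: str, line: int, character: int, request_id: int = 1) -> str:
    """Build a framed JSON-RPC 2.0 textDocument/definition request, per the LSP spec."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "textDocument/definition",
        "params": {
            "textDocument": {"uri": uri},
            "position": {"line": line, "character": character},
        },
    }
    body = json.dumps(payload)
    # LSP frames each message with a Content-Length header.
    return f"Content-Length: {len(body)}\r\n\r\n{body}"

def build_prompt(local_context: str, retrieved_definitions: list[str]) -> str:
    """Prepend retrieved type/symbol definitions to the local cursor context
    so the model sees more than the text immediately around the edit."""
    sections = ["# Retrieved context (via LSP):"]
    sections += retrieved_definitions
    sections += ["# Current edit location:", local_context]
    return "\n".join(sections)
```

In practice the editor already has a running language server per workspace, so resolving a definition adds one round trip rather than a separate indexing pass, which is why this kind of retrieval can stay within a prediction latency budget.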
Zeta2 represents a shift toward sustainable, scalable model development. The latest iteration is available as open-weight on Hugging Face, trained entirely on open-source code from users who actively consented to data sharing. Latency has also improved as a byproduct of the pipeline optimization, delivering faster predictions to users. Zed's roadmap includes continuous improvements via Direct Preference Optimization (DPO) and prompt format experimentation, alongside planned features like "jumps" that will suggest fixes at error locations flagged by language servers. The company remains committed to closing the quality gap with larger models while maintaining transparency and user control over training data.
- Latency improvements provide faster edit predictions, and the team is experimenting with Direct Preference Optimization (DPO) for continued enhancement
- Zed supports multiple edit prediction providers including Copilot Next-Edit and Mercury Coder, giving users choice in prediction quality and latency trade-offs
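The DPO direction mentioned above can be sketched with the standard Direct Preference Optimization objective (Rafailov et al.): accepted predictions play the role of "chosen" responses and rejected ones of "rejected". This is the generic loss, not Zed's training code; all names and numbers are illustrative.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss: negative log-sigmoid of the scaled margin by which
    the policy prefers the accepted edit over the rejected one, measured
    relative to a frozen reference model."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)); a larger margin drives the loss toward 0.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin the loss is log 2 (the model is indifferent); as the policy assigns relatively more probability to accepted edits than the reference does, the loss shrinks, which is what makes accept/reject telemetry a natural preference signal.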
Editorial Opinion
Zeta2 demonstrates a pragmatic approach to building production AI models at scale—prioritizing quality training data, transparent data practices, and iterative improvement over raw model size. By publishing the open-weight model and committing to continuous optimization via DPO, Zed is setting a credible standard for how code completion tools can scale responsibly while remaining developer-friendly. The shift from hand-crafted datasets to opt-in community data collection shows how real-world feedback loops can meaningfully improve AI products without sacrificing user trust.