BotBeat

Mistral AI · PRODUCT LAUNCH · 2026-04-29

Mistral Launches Medium 3.5 Model and Cloud-Based Coding Agents

Key Takeaways

  • Mistral Medium 3.5 is a 128B open-weight flagship model with a 256K context window, achieving 77.6% on SWE-Bench Verified and strong agentic capabilities
  • Remote coding agents now execute asynchronously in the cloud via Mistral Vibe, able to run in parallel while keeping developers informed of progress
  • Work mode in Le Chat (Preview) enables multi-step task execution with tool-calling and structured output, powered by Medium 3.5
Source: Hacker News, https://mistral.ai/news/vibe-remote-agents-mistral-medium-3-5

Summary

Mistral announced Mistral Medium 3.5, a new 128B flagship open-weight model released under a modified MIT license. Merging instruction-following, reasoning, and coding capabilities into a single dense model with a 256K context window, it achieves 77.6% on SWE-Bench Verified and 91.4 on τ³-Telecom—outperforming Devstral 2 and competing with larger models. The model is designed for self-hosting on as few as four GPUs, with configurable reasoning effort per request to balance latency and capability.
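The per-request reasoning-effort knob described above can be pictured as a request-builder sketch. This is an illustrative assumption, not Mistral's documented API: the parameter name `reasoning_effort`, the effort levels, and the model identifier are all hypothetical.

```python
# Hypothetical sketch of per-request reasoning effort, as described for
# Medium 3.5. The parameter name, effort levels, and model id are assumptions,
# not Mistral's documented API.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a chat-completion payload with a configurable reasoning effort.

    Lower effort trades capability for latency; higher effort does the reverse.
    """
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "mistral-medium-3.5",   # assumed model identifier
        "reasoning_effort": effort,      # assumed parameter name
        "messages": [{"role": "user", "content": prompt}],
    }

# Low effort for a quick lookup, high effort for a hard refactor plan:
fast = build_request("Summarize this diff", effort="low")
deep = build_request("Plan a multi-module refactor", effort="high")
```

The point of a per-request setting, rather than a per-deployment one, is that the same self-hosted model can serve both latency-sensitive and capability-sensitive traffic.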

Central to the announcement is the introduction of remote coding agents in Mistral Vibe, a paradigm shift that moves agent execution from local laptops to cloud infrastructure. Developers can spawn async coding tasks from the Vibe CLI or Le Chat that run in parallel and notify users upon completion. Local CLI sessions can be "teleported" to the cloud while preserving context and approvals, and each agent runs in an isolated sandbox with integrations to GitHub, Jira, Linear, Sentry, and communication tools like Slack and Teams.
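The async-agent pattern above, with parallel cloud tasks that report back as each finishes, can be sketched with standard-library `asyncio`. Task names and durations are invented for illustration; this is the general concurrency pattern, not Vibe's actual implementation.

```python
# Minimal asyncio sketch of the pattern described above: several agent tasks
# run in parallel, and the caller is "notified" as each one completes.
# Task names and sleep durations are illustrative stand-ins.
import asyncio

async def run_agent_task(name: str, seconds: float) -> str:
    """Stand-in for a remote coding agent working in an isolated sandbox."""
    await asyncio.sleep(seconds)  # simulated remote work
    return f"{name}: done"

async def main() -> list[str]:
    tasks = [
        asyncio.create_task(run_agent_task("fix-flaky-tests", 0.02)),
        asyncio.create_task(run_agent_task("bump-dependencies", 0.01)),
    ]
    results = []
    # as_completed yields tasks in finish order, mirroring completion
    # notifications rather than a blocking join at the end.
    for finished in asyncio.as_completed(tasks):
        results.append(await finished)
    return results

results = asyncio.run(main())
```

The shorter task surfaces first, which is the behavioral difference from a local, serial agent session: the developer gets progress as it happens instead of waiting on the slowest task.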

Mistral also previewed Work mode in Le Chat, a new agentic interface powered by Medium 3.5 that handles complex multi-step tasks including research, analysis, and cross-tool orchestration. The model replaces Devstral 2 as the default in both Le Chat and Vibe CLI, reflecting Mistral's confidence in its coding and agentic performance. The architecture positions humans in the loop wherever judgment is needed, targeting high-volume, well-defined work like refactors, test generation, dependency upgrades, and CI investigations.
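A multi-step agent loop of the kind Work mode describes, alternating tool calls with a final structured answer, might look like the following sketch. The tool names, the scripted turns, and the loop shape are all hypothetical; a real agent would receive each turn from the model rather than from a list.

```python
# Illustrative sketch (not Mistral's API) of a multi-step agent loop:
# each turn either dispatches a tool call or returns a final structured
# JSON answer. Tools and turns here are toy stand-ins.
import json

# Toy tools standing in for research/analysis integrations.
TOOLS = {
    "search_issues": lambda query: ["JIRA-42", "JIRA-57"],
    "count": lambda items: len(items),
}

# Scripted "model" turns; a real agent would get these from the LLM.
TURNS = [
    {"tool": "search_issues", "arg": "flaky tests"},
    {"tool": "count", "arg": None},  # None means "use previous result"
    {"final": '{"open_issues": 2}'},
]

def run_agent(turns: list[dict]) -> dict:
    """Dispatch tool calls in order, then parse the final structured output."""
    last = None
    for turn in turns:
        if "tool" in turn:
            arg = turn["arg"] if turn["arg"] is not None else last
            last = TOOLS[turn["tool"]](arg)
        else:
            return json.loads(turn["final"])
    raise RuntimeError("agent never produced a final answer")

result = run_agent(TURNS)
```

Structured output matters here because the final answer feeds downstream tooling (tickets, dashboards, CI), not just a chat window.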


Editorial Opinion

Mistral's shift of coding agents to cloud infrastructure addresses a real pain point in developer workflows—the need to babysit long-running tasks. The combination of open weights, transparent licensing, and self-hosting viability makes this a compelling alternative to proprietary cloud services. However, the practical impact hinges on reliability, latency, and whether developers trust cloud-based agents with their codebase; early enterprise adoption will be key validation.

Large Language Models (LLMs) · Generative AI · AI Agents · Product Launch · Open Source

