BotBeat

DeepSeek
PRODUCT LAUNCH · 2026-05-01

DeepSeek Releases V4 Models with 1M-Token Context and Aggressive Pricing Strategy

Key Takeaways

  • DeepSeek V4 models feature a 1 million token context window and come in two variants: Pro (1.6T total parameters, 49B active) and Flash (284B total, 13B active)
  • Pricing significantly undercuts competitors, with Flash at $0.14/$0.28 per million input/output tokens and Pro at $1.74/$3.48
  • Both models achieve dramatic efficiency improvements, needing only 27% of V3.2's single-token FLOPs and 10% of its KV cache size in million-token contexts
Source: Hacker News, https://simonw.substack.com/p/deepseek-v4-and-the-end-of-the-openaimicrosoft

Summary

Chinese AI lab DeepSeek has released two preview models from its anticipated V4 series: DeepSeek-V4-Pro and DeepSeek-V4-Flash. Both feature 1 million token context windows and are built on Mixture of Experts architecture, with Pro containing 1.6 trillion total parameters (49B active) and Flash containing 284 billion total (13B active). Both models are released under the MIT license as open weights.

The models demonstrate significant efficiency improvements over V3.2: in 1M-token contexts, DeepSeek-V4-Pro requires only 27% of the single-token FLOPs and 10% of the KV cache size of its predecessor. Most notably, pricing is substantially lower than competitors': Flash costs $0.14 per million input tokens and $0.28 per million output tokens, making it cheaper than OpenAI's GPT-5.4 Nano, while Pro, at $1.74/$3.48 per million input/output tokens, is the least expensive frontier-class model available.
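A back-of-the-envelope sketch of what these list prices mean per request, using only the input/output rates quoted above; the request sizes in the example are illustrative, not from the article:

```python
# Per-request cost at the article's quoted list prices.
# Prices are (input $/1M tokens, output $/1M tokens); request sizes below
# are hypothetical examples, not figures from the article.

PRICES = {
    "deepseek-v4-flash": (0.14, 0.28),
    "deepseek-v4-pro": (1.74, 3.48),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: tokens times the per-million-token rate."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a long-context request with a 100k-token input and a 2k-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 2_000):.4f}")
```

At these rates, even a 100k-token prompt on Flash costs well under two cents, which is the substance of the "commodity pricing" claim.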

According to DeepSeek's benchmarks, V4-Pro is competitive with leading models from OpenAI, Google, and Anthropic, though DeepSeek acknowledges a 3-6 month development gap from state-of-the-art. With V4-Pro being the largest open weights model to date and both models using MIT licensing, the release represents a significant democratization of access to capable large language models.

  • Released under MIT license as open weights, democratizing access to frontier-class model capabilities
  • V4-Pro performance is competitive with leading models despite being 3-6 months behind state-of-the-art

Editorial Opinion

DeepSeek V4 represents a watershed moment in the LLM market: frontier-class performance at commodity pricing, with open weights licensing. The dramatic cost advantage and efficiency gains could force significant price competition across the industry and accelerate deployment of capable AI systems globally. While the acknowledged developmental gap from pure frontier models suggests room for continued improvement, DeepSeek has effectively decoupled price from capability—a shift that threatens the incumbent business models of OpenAI, Google, and Anthropic.

Large Language Models (LLMs) · Generative AI · Market Trends · Product Launch · Open Source

More from DeepSeek

  • DeepSeek V4: How a 200-Person Chinese Team Built a Superior AI Model on a Fraction of Big Tech's Budget (PRODUCT LAUNCH, 2026-05-01)
  • Finetuning Unlocks Verbatim Memorization of Copyrighted Books in Large Language Models (RESEARCH, 2026-04-30)
  • DeepSeek Releases V4 with Million-Token Context Optimized for AI Agents (PRODUCT LAUNCH, 2026-04-28)

Suggested

  • Anthropic Donates to Blender Foundation, Pivots Away from Development Fund Membership Amid Community AI Concerns (Anthropic, PARTNERSHIP, 2026-05-01)
  • LLMs Don't Understand BGP. Here's What It Takes to Change That (AI Industry / General-Purpose LLMs, INDUSTRY REPORT, 2026-05-01)
  • OpenAI Launches Advanced Account Security with Passkeys and Training Opt-Out (OpenAI, UPDATE, 2026-05-01)
© 2026 BotBeat