BotBeat
Minimax · RESEARCH · 2026-03-26

Final Training Runs Account for Only 10-30% of AI R&D Compute Spending, Analysis Shows

Key Takeaways

  • Final training runs represent only 10-30% of total R&D compute spending across OpenAI, MiniMax, and Z.ai; the majority goes to exploration and experimentation
  • The full cost of developing an AI model is significantly higher than the cost of the final training run alone, with most spending on scaling experiments, synthetic data generation, and research
  • This pattern holds across companies of different sizes, countries, and business models, suggesting it may be a fundamental characteristic of frontier AI development
Source: Hacker News, via https://epochai.substack.com/p/final-training-runs-account-for-a

Summary

A new analysis of compute spending patterns at major AI companies reveals that final training runs—the computational work that produces released models—account for a minority of total R&D compute expenditure. According to research from Epoch AI, OpenAI spent approximately $5 billion on R&D compute in 2024, with only about $500 million (roughly 10%) allocated to final training runs that produced released models like GPT-4.5. The remainder was devoted to scaling experiments, synthetic data generation, basic research, and other exploratory workloads.
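The split described above can be made concrete with the figures the article cites. This is a minimal sketch of the arithmetic, assuming the reported numbers (roughly $5 billion total R&D compute, of which about $500 million went to final training runs); the variable names are illustrative, not from the analysis itself.

```python
# Illustrative arithmetic based on the Epoch AI figures cited above.
# Assumed inputs: ~$5B total R&D compute spend, ~$500M on final training runs.
total_rd_compute = 5_000_000_000   # OpenAI 2024 R&D compute spend (USD)
final_runs = 500_000_000           # spend on final training runs (USD)

final_share = final_runs / total_rd_compute
exploration = total_rd_compute - final_runs

print(f"Final training runs: {final_share:.0%} of R&D compute")  # 10%
print(f"Exploration and experimentation: ${exploration / 1e9:.1f}B")  # $4.5B
```

The point of the exercise is that the headline "cost to train" figure captures only the last term; under these assumptions, roughly nine dollars of exploratory compute sit behind every dollar spent on the run that produces a released model.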

The pattern holds across companies of different scales and geographies. Analysis of recent IPO disclosures from Chinese AI companies MiniMax and Z.ai shows that both firms similarly allocate a minority of their R&D compute budgets to final training runs, despite operating at smaller scales than OpenAI. This distinction has significant implications for understanding AI development costs and competitive dynamics: while the headline cost of training a frontier model may be hundreds of millions of dollars, the full development cost is substantially higher. Conversely, competitors who can learn from frontier results should, in theory, require less exploratory compute and can devote a higher share of their compute budgets to final training.

  • Followers of frontier AI may be able to replicate results with substantially less compute if they can learn what approaches work without repeating all exploration phases

Editorial Opinion

This analysis provides crucial nuance to public discourse around AI training costs and compute requirements. The distinction between final training compute and total R&D compute spending fundamentally reframes how we should think about AI development economics and competitive advantages. If true across the industry, this pattern suggests that the real barrier to entry for frontier AI is not just the compute capacity to train large models, but the institutional knowledge and experimentation resources to identify which approaches are worth scaling—a factor that may be harder to quantify but potentially more defensible than raw computational power.

Machine Learning · Data Science & Analytics · Science & Research · Market Trends

