BotBeat
Anthropic · Research · 2026-04-29

Benchmark: Opus 4.7 Costs 80% More in Default Settings, But Tool Design Reshapes Economics

Key Takeaways

  • Opus 4.7 costs 80% more per run than 4.6 on default Claude Code settings ($11.62 → $20.92), with per-turn costs rising 2.6x
  • WOZCODE users see only a 12% cost increase, with savings expanding from 41% to 63% of baseline costs
  • Smarter models amplify tools designed for batching and planning; the 80% increase reflects a mismatch between improved planning capability and sequential tool design
Source: Hacker News (https://www.wozcode.com/blog/opus-4-7-pricing)

Summary

Anthropic's new Opus 4.7 model carries a significant price increase: approximately 80% higher costs per run compared to Opus 4.6 when using Claude Code's default settings. Benchmarking data shows vanilla costs jumped from $11.62 to $20.92 per run, with per-turn costs rising 2.6x from roughly $0.05 to $0.13. However, users running WOZCODE, a token-saving plugin for Claude Code, see only a 12% increase ($6.88 to $7.73), with savings expanding from 41% to 63% of baseline costs.
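The percentages above follow directly from the quoted dollar figures. A quick sanity check of the arithmetic, using only numbers from the article:

```python
# Per-run costs quoted in the article (all values from the benchmark writeup).
vanilla_46, vanilla_47 = 11.62, 20.92  # default Claude Code settings
woz_46, woz_47 = 6.88, 7.73            # with the WOZCODE plugin

def pct_increase(new: float, old: float) -> float:
    return (new - old) / old * 100

def pct_savings(woz: float, vanilla: float) -> float:
    return (vanilla - woz) / vanilla * 100

print(round(pct_increase(vanilla_47, vanilla_46)))  # 80  -- vanilla cost jump
print(round(pct_increase(woz_47, woz_46)))          # 12  -- WOZCODE cost jump
print(round(pct_savings(woz_46, vanilla_46)))       # 41  -- savings on 4.6
print(round(pct_savings(woz_47, vanilla_47)))       # 63  -- savings on 4.7
```

The numbers are internally consistent: the 41% → 63% savings figures are exactly what the four per-run costs imply.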

The gap reveals a critical insight about LLM economics: smarter models amplify the value of tools designed around planning and batching. WOZCODE replaces Claude Code's built-in tools with optimized versions that enable combined operations (e.g., searching and reading in a single call), batched edits across multiple files, and intelligent task delegation. Traditional tools lack this capability—there's no "plan ten edits" operation for a planner to delegate to. As a result, vanilla Claude Code on 4.7 simply spends more thinking tokens per turn at higher default settings, multiplying the cost without behavioral change.
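The batching idea described above can be sketched as follows. This is a minimal illustration of combined and batched operations, not WOZCODE's actual API; every name here is hypothetical:

```python
# Hypothetical sketch of tools built for batching rather than sequential calls.
# Names (search_and_read, apply_batch, Edit) are illustrative only.
from dataclasses import dataclass

@dataclass
class Edit:
    path: str
    old: str
    new: str

def search_and_read(pattern: str, files: dict[str, str]) -> dict[str, str]:
    """Combined operation: find files matching a pattern and return their
    contents in one round trip, instead of a search call plus N read calls."""
    return {path: text for path, text in files.items() if pattern in text}

def apply_batch(edits: list[Edit], files: dict[str, str]) -> dict[str, str]:
    """Apply a planned batch of edits across multiple files in a single call,
    the kind of 'plan ten edits' operation the article says default tools lack."""
    out = dict(files)  # leave the input mapping untouched
    for e in edits:
        out[e.path] = out[e.path].replace(e.old, e.new)
    return out

# A planner that can emit one apply_batch call replaces many single-edit turns.
files = {"a.py": "x = 1\n", "b.py": "x = 1\ny = 2\n"}
matches = search_and_read("y =", files)              # one call, not search + reads
updated = apply_batch([Edit("a.py", "1", "2")], files)
```

The point is interface shape, not implementation: a model that plans well can only exploit that planning if a single tool call can carry a whole plan.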

With 4.7's improved planning ability, WOZCODE's advantage becomes more pronounced. The same benchmark suite that required 128 turns on 4.6 now completes in just 52 turns on 4.7—a 59% reduction. The smarter model recognizes when to batch operations, when simplified code signatures suffice, and when to delegate subtasks to specialized agents, enabling fundamentally different (more efficient) interaction patterns.

  • WOZCODE on 4.7 completes benchmarks in 52 turns vs. 128 on 4.6: same tools, but 4.7's better planning cuts required interactions by nearly 60%
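The 59% figure quoted above is just the turn counts from the benchmark:

```python
# Turn counts from the article: same suite, same WOZCODE tools, different model.
turns_46, turns_47 = 128, 52
reduction = (turns_46 - turns_47) / turns_46 * 100
print(round(reduction))  # 59 -- well over half the interactions eliminated
```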

Editorial Opinion

This benchmark reveals a fundamental principle about LLM economics: as models become smarter, they enable new interaction patterns that fundamentally change cost dynamics. The 80% price increase for vanilla Opus 4.7 isn't inevitable—it emerges from a mismatch between the model's improved planning capability and tools designed for sequential interaction. For organizations building on LLMs, the takeaway is clear: cost efficiency comes not from avoiding capable models, but from aligning tool design and interaction patterns with what those models can plan for.

Large Language Models (LLMs) · AI Agents · Market Trends · Product Launch
