Anthropic Releases /usage Command and Publishes Guide to Optimizing Claude Code with 1M Context Window
Key Takeaways
- ▸Anthropic launched the /usage command to give developers visibility into token consumption in Claude Code sessions
- ▸The 1M context window enables longer tasks but introduces context rot—performance degradation as context grows—requiring strategic management
- ▸Users have five distinct options at each turn: continue, rewind, clear/new session, compact, or delegate to subagents, each suited for different scenarios
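Four of these five options correspond to in-session slash commands in Claude Code; a minimal sketch of how they might appear at the session prompt (command names as described in this article; exact syntax and availability may vary by Claude Code version):

```
> /usage      # inspect token consumption for the current session
> /compact    # summarize earlier turns to reclaim context space
> /clear      # wipe context and start fresh for an unrelated task
> /rewind     # return to an earlier checkpoint and re-prompt
```

Delegating to subagents is the exception: per the guide, it is typically done by prompting Claude to hand off a subtask rather than via a dedicated command.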
Summary
Anthropic has released a new /usage slash command for Claude Code that helps developers understand their token usage, in response to customer feedback about session and context management. The company has also published a comprehensive guide to managing sessions, context windows, and compaction strategies with Claude Code's newly expanded 1 million token context window. The guide addresses a critical issue called "context rot"—the degradation of model performance as context grows—and offers practical strategies for deciding when to continue a session, rewind, start a new session, compact, or delegate to subagents. Anthropic emphasizes that strategic context management significantly shapes user experience and results with Claude Code, and offers a rule of thumb: new tasks should generally start in new sessions to mitigate context rot.
- Best practice is to start a new session for each new task, though closely related tasks may benefit from partial context reuse
- Rewinding to an earlier point and re-prompting is often more effective than in-context corrections after a failed approach
Editorial Opinion
Anthropic's emphasis on practical context management guidance reflects a maturation in how AI companies should support developers using large context windows. By publicly documenting session management strategies and introducing usage visibility tools, Anthropic is tackling a real usability challenge that emerges at scale. This approach—combining better observability with educational content—sets a precedent for responsible scaling of AI capabilities.

