BotBeat

Anthropic · UPDATE · 2026-04-24

Anthropic Issues Engineering Postmortem After Claude Memory Bug Affects User Experience

Key Takeaways

  • A session idle handling bug caused Claude to forget conversation context after 60 minutes of inactivity, resulting in repetitive and less intelligent responses
  • Three separate changes—reasoning effort reduction, context-clearing bug, and verbosity prompt—compounded performance degradation, with the verbosity change alone causing a 3% intelligence drop
  • Anthropic rapidly addressed all issues between April 7 and April 20, reverting changes and deploying fixes, demonstrating a commitment to prioritizing performance over cost savings
Source: Hacker News — https://www.aiuniverse.news/claudes-memory-lapse-a-bug-erased-its-reasoning-after-an-hour/

Summary

Anthropic has released a detailed postmortem addressing a significant bug that caused Claude to forget conversation context after 60 minutes of inactivity. The issue stemmed from three separate changes across Claude Code, the Claude Agent SDK, and Claude Cowork, including a session idle handling bug that repeatedly cleared older thought processes and a system prompt change aimed at reducing verbosity. While the Anthropic API itself remained unaffected, users experienced noticeable degradation in Claude's performance, including repetitive responses and reduced coding quality.

The root causes involved multiple missteps: a deliberate shift to medium reasoning effort (from high) on March 4 to improve efficiency, a context-clearing bug introduced on March 26, and a verbosity-reduction prompt deployed on April 16 that caused a 3% intelligence drop in evaluations. Anthropic has since reversed all problematic changes, with fixes deployed between April 7 and April 20, and usage limits reset on April 23. The incident highlights the tension between cost optimization and consistent AI performance, particularly as user expectations for reliable AI agent functionality continue to rise.

  • The incident underscores the challenge AI providers face balancing efficiency and latency against consistent intelligent performance and user trust
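
The idle-handling failure described above can be sketched as a timestamp check that wipes session state instead of preserving it. The following is a minimal, hypothetical reconstruction of that bug pattern — the class, method names, and threshold constant are illustrative, not Anthropic's actual code:

```python
IDLE_LIMIT_SECONDS = 60 * 60  # the 60-minute idle threshold from the postmortem


class Session:
    """Hypothetical sketch of a chat session with a buggy idle-expiry check.

    This is NOT Anthropic's implementation; it only illustrates the failure
    mode described in the postmortem: after a long idle gap, older context
    is cleared, so the assistant "forgets" the conversation so far.
    """

    def __init__(self, now=0.0):
        self.context = []        # accumulated conversation context
        self.last_active = now   # timestamp of the last message, in seconds

    def on_message(self, text, now):
        # BUG: on resume after more than 60 minutes idle, the entire
        # context is dropped instead of being carried forward.
        if now - self.last_active > IDLE_LIMIT_SECONDS:
            self.context.clear()
        self.last_active = now
        self.context.append(text)
        return list(self.context)
```

Resuming a session past the threshold returns only the new message; the fix is simply to retain `self.context` across idle gaps rather than clearing it.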

Editorial Opinion

Anthropic's transparent postmortem reveals a cautionary tale about optimizing AI systems prematurely. While cost and efficiency improvements are necessary business considerations, this incident demonstrates that cutting corners on reasoning depth or context retention can severely damage user trust—especially as competitors like OpenAI offer more reliable long-context performance. The silver lining is Anthropic's swift response and willingness to reverse unpopular changes, but the episode raises important questions about how AI companies validate performance trade-offs before rolling them out to millions of users.

Large Language Models (LLMs) · AI Agents · MLOps & Infrastructure · Jobs & Workforce Impact
