BotBeat

POLICY & REGULATION · Anthropic · 2026-04-15

Anthropic's Claude Faces Critical Usage Limit Drain Bug Across All Paid Tiers; Customer Demands Better Communication

Key Takeaways

  • A critical bug causing abnormal usage-limit drain has affected all Claude paid tiers for over a week, with no official communication from Anthropic
  • The issue appears to be a regression introduced after version 2.1.34; users have confirmed that downgrading to that version resolves the problem
  • Anthropic's silence across support channels has generated more reputational damage than the technical bug itself, with the customer criticizing the absence of status updates, communication, or remediation offers
Source:
Hacker News · https://github.com/anthropics/claude-code/issues/41930

Summary

A critical bug affecting Claude across all paid subscription tiers has been draining usage limits abnormally since March 23, 2026, with hundreds of users reporting that single interactions consume as much as 2% of their monthly quota. A frustrated paying customer, unable to reach Anthropic through support tickets, Twitter, or other channels over three days, filed a public GitHub issue detailing the widespread service degradation and demanding transparency. The developer notes that community members reverse-engineered the root cause over a single weekend, tracing it to a regression introduced after version 2.1.34 (downgrading to that version, as sketched below, reportedly restores normal behavior), and argues that standard deployment practices such as canary releases and rollbacks could have kept the issue from reaching the entire user base. The incident has trended on Hacker News and drawn coverage from multiple tech publications; for users paying $20–$200 per month for degraded service, the financial impact is significant.

  • The incident highlights gaps in Anthropic's incident response practices, including the absence of A/B testing, staged rollouts, and other standard DevOps safeguards that would have caught the issue before widespread deployment
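For affected users, the community workaround is to pin the CLI to the last known-good release. A minimal sketch, assuming Claude Code ships as the npm package @anthropic-ai/claude-code (matching the linked repository) and that the DISABLE_AUTOUPDATER environment variable suppresses self-updates; both names should be verified against Anthropic's current documentation:

    # Pin the CLI to the last release reported free of the usage-drain regression
    npm install -g @anthropic-ai/claude-code@2.1.34

    # Assumed setting: stop the CLI from auto-updating past the pinned version
    export DISABLE_AUTOUPDATER=1

Pinning is a stopgap rather than a fix; once Anthropic ships a patched release, removing the pin returns the client to the normal update channel.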

Editorial Opinion

While software bugs are inevitable, Anthropic's handling of this incident—or rather, the lack thereof—represents a significant disconnect between product quality and customer trust management. A company that positions itself as a responsible AI steward must extend that responsibility to its paying users; silence during a service-impacting outage is not a technical failure but a communication and operations failure. That users independently diagnosed and remediated the issue while awaiting any official acknowledgment raises hard questions about the maturity of Anthropic's incident response at scale.

Large Language Models (LLMs) · Regulation & Policy · Privacy & Data · Jobs & Workforce Impact
