BotBeat

INDUSTRY REPORT · Anthropic · 2026-04-08

AMD AI Director Reports Claude Code Performance Degradation Since March Update

Key Takeaways

  • AMD's AI director documented a measurable decline in Claude Code's performance since a March 8th update, with "laziness" indicators rising from zero to an average of 10 per day
  • Analysis of over 234,000 tool calls shows Claude now reads code less thoroughly (2 reads on average vs. 6.6 previously) and increasingly rewrites entire files instead of making targeted edits
  • The degradation correlates with the rollout of thinking-content redaction in version 2.1.69, which hides the model's reasoning process from users
Source: Hacker News (https://www.theregister.com/2026/04/06/anthropic_claude_code_dumber_lazier_amd_ai_director/)

Summary

Stella Laurenzo, director of the AI group at AMD, filed a GitHub issue alleging that Claude Code has significantly degraded in performance since early March, becoming "dumber and lazier" in executing complex engineering tasks. Based on analysis of 6,852 Claude Code sessions with 234,760 tool calls, Laurenzo's team documented a sharp increase in "laziness" indicators—such as stop-hook violations and permission-seeking behavior—rising from zero prior to March 8th to an average of 10 per day by month's end. The team also observed that Claude began reading code less frequently (dropping from 6.6 average reads to 2) and increasingly rewrote entire files rather than making targeted edits.
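The kind of per-day aggregation Laurenzo's team describes can be sketched as follows. This is a hypothetical illustration only: the event names (`stop_hook_violation`, `permission_seeking`) and the log schema are assumptions for the example, not Anthropic's actual session-log format.

```python
# Hypothetical sketch: counting "laziness" indicators per day from
# Claude Code session logs. Event names and log schema are assumed
# for illustration; they are not Anthropic's actual format.
from collections import Counter
from datetime import date

# Assumed event types treated as laziness indicators in the analysis
LAZINESS_EVENTS = {"stop_hook_violation", "permission_seeking"}

def daily_indicator_counts(events):
    """Count laziness-indicator events per calendar day.

    `events` is an iterable of (day: datetime.date, event_type: str) pairs.
    """
    counts = Counter()
    for day, event_type in events:
        if event_type in LAZINESS_EVENTS:
            counts[day] += 1
    return counts

# Toy log: two flagged events on March 9th, one on March 10th;
# the unflagged "file_read" event is ignored by the counter.
log = [
    (date(2026, 3, 9), "stop_hook_violation"),
    (date(2026, 3, 9), "permission_seeking"),
    (date(2026, 3, 9), "file_read"),
    (date(2026, 3, 10), "stop_hook_violation"),
]
counts = daily_indicator_counts(log)
```

Averaging such daily counts over a window before and after March 8th is what would yield the reported jump from zero to roughly 10 indicators per day.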

Laurenzo attributes the decline to Anthropic's early March deployment of thinking content redaction in Claude Code version 2.1.69, which strips thinking content from API responses by default. This change, she argues, causes the model to default to "the cheapest action available: edit without reading, stop without finishing, dodge responsibility for failures." Laurenzo is calling for Anthropic to be transparent about whether it is reducing or capping thinking tokens, and has requested features such as token usage exposure per request and a maximum thinking tier for complex workflows. The complaint joins other recent criticisms of Claude Code, including unexplained token usage surges and the exposure of its source code.


Editorial Opinion

The allegations from a senior engineer at a major chipmaker like AMD carry significant weight and suggest potential systemic issues with how Anthropic is managing Claude Code's reasoning capabilities. If Anthropic has indeed implemented changes that reduce model depth-of-thought as a cost-cutting measure, transparency about those trade-offs is essential—users deserve to know whether they're getting the capability they're paying for. The combination of performance complaints, token usage surprises, and now evidence of reduced reasoning depth threatens to erode confidence in Claude Code as a reliable tool for professional engineering work.

Large Language Models (LLMs) · AI Agents · Ethics & Bias · AI Safety & Alignment

© 2026 BotBeat