BotBeat

Anthropic
POLICY & REGULATION · 2026-03-14

Anthropic Faces Backlash Over Silent A/B Testing of Claude Code Features Without User Consent

Key Takeaways

  • Anthropic is running silent A/B tests on Claude Code that assign users to restrictive feature variants without notification, consent, or visibility
  • The "tengu_pewter_ledger" test includes four variants with progressively tighter restrictions on plan generation; the most aggressive, "cap", severely limits output length and removes explanatory content
  • The practice contradicts Anthropic's stated commitment to transparency and responsible AI deployment, a particular concern for a $200/month professional tool whose users need control over, and insight into, its behavior
Source: Hacker News (https://backnotprop.com/blog/do-not-ab-test-my-workflow/)

Summary

A security researcher has uncovered undisclosed A/B testing within Anthropic's Claude Code product, revealing that paying users are being enrolled in experiments that degrade core functionality without their knowledge or consent. By decompiling the Claude Code binary, the researcher discovered a GrowthBook-managed test called "tengu_pewter_ledger" that controls how the plan mode feature generates outputs, with four progressively restrictive variants ranging from full-featured to severely limited. The most aggressive variant, labeled "cap," was assigned to the researcher without notification, hard-capping plans at 40 lines, removing context sections and prose explanations, and presenting users with only terse bullet points instead of collaborative planning workflows.
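The mechanics described above can be sketched as a GrowthBook-style client-side variant assignment. This is a minimal illustration, not decompiled code: the flag key "tengu_pewter_ledger" and the 40-line, bullets-only behavior of "cap" come from the article, while the other variant names, the hash scheme, and every function name here are assumptions.

```typescript
// Hypothetical sketch of deterministic variant bucketing, in the style of
// feature-flag SDKs like GrowthBook. Only "tengu_pewter_ledger" and the
// "cap" variant's behavior are sourced from the article.

type PlanVariant = "control" | "trimmed" | "terse" | "cap"; // names besides "cap" are invented

const VARIANTS: PlanVariant[] = ["control", "trimmed", "terse", "cap"];

// FNV-1a hash: a simple stable hash so the same user always lands in the
// same bucket across sessions (real SDKs use a comparable stable hash).
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Assign a user to one of the four variants, silently -- no notification
// or opt-in, which is the core of the complaint.
function assignVariant(flagKey: string, userId: string): PlanVariant {
  const bucket = fnv1a(`${flagKey}:${userId}`) % VARIANTS.length;
  return VARIANTS[bucket];
}

// The "cap" variant as the article describes it: context sections and prose
// stripped so only bullet points remain, hard-capped at 40 lines.
function applyVariant(planLines: string[], variant: PlanVariant): string[] {
  if (variant === "cap") {
    return planLines
      .filter((line) => line.trim().startsWith("-")) // keep only bullets
      .slice(0, 40); // hard cap at 40 lines
  }
  return planLines; // other variants left as pass-through in this sketch
}
```

The deterministic hash is the key design point: it lets the client assign variants locally and consistently without a server round-trip, which is also what makes the assignment invisible to the user unless they decompile the binary, as the researcher did.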

The researcher emphasized the contradiction between Anthropic's positioning as an AI safety company and its adoption of silent experimentation practices reminiscent of Meta's engagement-optimization culture. The telemetry system logs variant assignments and usage metrics without user awareness, raising transparency concerns for a product marketed as a professional tool requiring human-in-the-loop control. While acknowledging that Anthropic likely intends to optimize rather than intentionally degrade experience, the researcher argues that deploying such changes silently to paying subscribers contradicts the principles of responsible AI development and denies users agency over their workflow.

  • The incident reflects tension between product optimization cultures and AI safety principles, raising questions about user agency in AI-assisted workflows

Editorial Opinion

This revelation exposes a troubling gap between Anthropic's public commitment to responsible AI development and its actual product practices. While A/B testing is standard in software, conducting silent experiments on paying professional users—especially when variants demonstrably degrade usability—violates the transparency and user agency principles that should define AI safety leadership. For tools like Claude Code where human-in-the-loop collaboration is the core value proposition, removing visibility into how the system makes decisions actively undermines that premise. Anthropic must implement mandatory opt-in testing, clear variant notifications, and user controls for professional-tier products.

AI Agents · Ethics & Bias · AI Safety & Alignment · Privacy & Data

