BotBeat

GitHub · UPDATE · 2026-04-15

GitHub Copilot Customers Revolt Over Aggressive Rate Limits Following Token Counting Bug Fix

Key Takeaways

  • GitHub discovered a critical token counting bug that underestimated usage from newer AI models, masking true infrastructure costs and breaking the company's pricing model
  • Newly implemented rate limits are causing customer backlash, with some users facing 44-hour to multi-day restrictions despite paying for premium subscriptions
  • GitHub has also suspended Copilot Pro free trials due to abuse and retired Anthropic's Opus 4.6 Fast model for Pro+ users as cost-control measures
Sources:
  • https://www.theregister.com/2026/04/15/github_copilot_rate_limiting_bug/ (via Hacker News)
  • https://github.com/orgs/community/discussions/180092 (via Hacker News)
  • https://github.com/orgs/community/discussions/192435 (via Hacker News)

Summary

Microsoft's GitHub has imposed strict rate limits on Copilot users after discovering a token counting bug that significantly undercounted usage from newer AI models such as Claude Opus 4.6 and GPT-5.4. The bug masked true infrastructure costs, allowing unexpectedly high consumption that strained the company's servers. Customers now report rate limits lasting from hours to multiple days, and some premium users who spend hundreds of pounds a month on credits are finding their access severely restricted. The situation has prompted widespread complaints in GitHub Copilot community forums, with users frustrated by the sudden shift from an all-you-can-eat pricing model to aggressive throttling that appears designed to rein in unexpected costs.

  • Similar capacity and cost management issues are affecting competitors including Anthropic and OpenAI's Codex, indicating broader industry challenges with AI service economics
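The mechanics of a bug like this are easy to illustrate. The sketch below is purely hypothetical: the model names, multiplier values, and functions are invented for illustration and are not taken from GitHub's billing code. It shows how silently falling back to a base rate for models missing from a cost table undercounts billed usage, which is one plausible way true infrastructure cost gets masked.

```python
# Hypothetical illustration of a token-accounting bug. Newer models are
# missing from the multiplier table, so their usage is charged at the
# base rate and the real cost is hidden. All names/values are invented.

# Per-model usage multipliers (illustrative values only).
COST_MULTIPLIERS = {
    "gpt-4.1": 1.0,
    "claude-sonnet": 1.0,
    # Newer, pricier models were (hypothetically) never added:
    # "claude-opus-4.6": 10.0,
    # "gpt-5.4": 5.0,
}

def billed_tokens(model: str, raw_tokens: int) -> int:
    """Usage charged against a customer's quota.

    The bug: unknown models silently fall back to a 1.0 multiplier
    instead of raising, so expensive models look cheap.
    """
    return int(raw_tokens * COST_MULTIPLIERS.get(model, 1.0))

def billed_tokens_fixed(model: str, raw_tokens: int) -> int:
    """Fixed variant: an unconfigured model is an error, not a free ride."""
    if model not in COST_MULTIPLIERS:
        raise KeyError(f"no cost multiplier configured for {model!r}")
    return int(raw_tokens * COST_MULTIPLIERS[model])

# A heavy session on a new model is charged as if it were a cheap one:
print(billed_tokens("claude-opus-4.6", 1_000_000))  # prints 1000000
```

Once the fallback is removed and the table is corrected, previously hidden usage shows up at its true weight all at once, which is consistent with customers suddenly hitting limits they never saw before.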

Editorial Opinion

GitHub's handling of its token counting bug reveals the precarious economics underlying AI-as-a-service models when pricing assumptions break down. While the infrastructure strain is legitimate, the aggressive and seemingly opaque rate limiting, with some users facing 44-hour lockouts, risks eroding customer trust and suggests the company struggled to communicate transparently about the underlying cost problem. The episode underscores a critical industry lesson: as AI models become more capable and more expensive to run, all-you-can-eat pricing is unsustainable without robust usage monitoring and honest customer communication from day one.

Tags: Large Language Models (LLMs), Generative AI, AI Agents, Earnings & Financials, Market Trends, Regulation & Policy, Jobs & Workforce Impact, Product Launch
