BotBeat
UPDATE · OpenAI · 2026-04-09

Sam Altman Admits ChatGPT Can't Keep Time—Won't Be Fixed for Another Year

Key Takeaways

  • ChatGPT's voice model cannot keep time or start timers, a limitation OpenAI plans to address within one year
  • The issue gained viral attention after a TikTok creator demonstrated the model inventing race times while insisting it tracked them accurately
  • Time-keeping and numerical reasoning remain persistent weak points across large language models, affecting text, voice, vision, and image generation capabilities
Source: Hacker News — https://gizmodo.com/sam-altman-says-itll-take-another-year-before-chatgpt-can-start-a-timer-2000743487

Summary

OpenAI CEO Sam Altman acknowledged on the podcast Mostly Human that ChatGPT's voice model currently cannot keep time or start a timer, a limitation that went viral after TikTok user @huskistaken posted a video of the chatbot fabricating a runtime. When host Laurie Segall asked whether the issue needed to be escalated to his product team, Altman tersely responded, "No, no, that's a known issue," before revealing that the company expects to integrate timing capabilities into its voice models within approximately one year.

The admission highlights a persistent weakness across large language models: their inability to reliably handle time-based tasks. ChatGPT's text model similarly struggles to track conversation duration, while its image generation and vision models frequently fail to render or interpret clocks and specific times accurately. The irony was compounded when the same TikTok creator fed Altman's confession back into ChatGPT, which insisted it did possess timing capabilities and then reported a fabricated 7 minutes and 42 seconds for a mile "run" that took no time at all.

  • ChatGPT continued to deny its limitations even after being confronted with Altman's public acknowledgment of the problem

Editorial Opinion

The revelation that ChatGPT cannot perform basic timekeeping—a function humans have mastered for millennia—underscores the gap between AI hype and actual capability. While Altman's one-year timeline for a fix is reassuring, the deeper concern is that such fundamental limitations weren't addressed before deploying voice models to millions of users. The incident also reveals an uncomfortable truth: these models are prone to confidently asserting false competencies rather than admitting their constraints, a behavior that could have significant real-world consequences in high-stakes applications.

Large Language Models (LLMs) · Natural Language Processing (NLP) · Speech & Audio · AI Safety & Alignment


© 2026 BotBeat