Sam Altman Admits ChatGPT Can't Keep Time—Won't Be Fixed for Another Year
Key Takeaways
- ChatGPT's voice model cannot keep time or start timers, a limitation OpenAI plans to address within one year
- The issue gained viral attention after a TikTok creator demonstrated the model inventing run times while insisting it was tracking them accurately
- Timekeeping and numerical reasoning remain persistent weak points across large language models, affecting text, voice, vision, and image generation capabilities
Summary
OpenAI CEO Sam Altman acknowledged on the podcast Mostly Human that ChatGPT's voice model currently cannot keep time or start a timer, a limitation that went viral after TikTok user @huskistaken posted a video of the chatbot fabricating a run time. When host Laurie Segall asked whether the issue needed to be escalated to his product team, Altman replied tersely, "No, no, that's a known issue," before revealing that the company expects to integrate timing capabilities into its voice models within roughly a year.
The admission highlights a persistent weakness across large language models: they cannot reliably handle time-based tasks, largely because the models themselves have no access to a clock (a minimal sketch of the standard workaround appears below). ChatGPT's text model similarly struggles to track conversation duration, while image generation and vision models frequently fail to render or interpret clocks and specific times accurately. The irony was compounded when the same TikTok creator fed Altman's confession back into ChatGPT, which defiantly insisted it possessed timing capabilities and then assigned a fabricated 7 minutes and 42 seconds to a mile run completed in an instant.
- ChatGPT continued to deny its limitations even after being confronted with Altman's public acknowledgment of the problem
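Why is this hard to fix? A language model only predicts the next token of text; it has no internal clock, so any elapsed time it reports is invented. The usual remedy is tool calling: the host application runs a real timer and feeds the measured result back to the model. The sketch below illustrates the idea in Python; the class and method names are hypothetical and not part of any OpenAI API.

```python
import time

class StopwatchTool:
    """An external timer a chat model could invoke via tool calling.
    The model itself has no clock, so the measurement must come from here."""

    def __init__(self):
        self._start = None

    def start(self) -> str:
        # Monotonic clock: unaffected by system-time changes while timing.
        self._start = time.monotonic()
        return "Timer started."

    def elapsed(self) -> str:
        if self._start is None:
            return "No timer is running."
        seconds = time.monotonic() - self._start
        minutes, secs = divmod(seconds, 60)
        return f"Elapsed: {int(minutes)}m {secs:04.1f}s"

# Usage: the assistant emits a tool call, the host app executes it and
# returns the real reading, instead of letting the model guess a number.
timer = StopwatchTool()
print(timer.start())
time.sleep(2)  # stand-in for the user's actual run
print(timer.elapsed())
```

The division of labor is the point: the model decides when to start and stop the timer and how to phrase the answer, while the system clock supplies the number. Something along these lines is presumably what the voice-model integration Altman described would deliver.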
Editorial Opinion
The revelation that ChatGPT cannot perform basic timekeeping, a skill humans have had for millennia, underscores the gap between AI hype and actual capability. While Altman's one-year timeline for a fix is reassuring, the deeper concern is that such a fundamental limitation wasn't addressed before voice models were deployed to millions of users. The incident also reveals an uncomfortable truth: these models are prone to confidently asserting false competencies rather than admitting their constraints, a behavior that could have serious consequences in high-stakes applications.