Former Uber Self-Driving Chief Crashes Tesla on FSD, Exposes Fundamental Supervision Problem
Key Takeaways
- Even autonomous vehicle experts are vulnerable to Tesla FSD's "vigilance decrement trap," in which near-perfect performance paradoxically encourages drivers to stop paying attention
- Tesla's Level 2 classification places full liability on drivers while the company retains extensive telemetry data it uses to shift blame post-crash, creating an asymmetric accountability problem
- Psychological research confirms that monitoring a system that works almost perfectly creates dangerous attention gaps of 5 to 8 seconds, longer than typical emergency response windows
Summary
Raffi Krikorian, Mozilla's CTO and former head of Uber's autonomous vehicle division, totaled his Tesla Model X while using Full Self-Driving on a residential street with his children in the back seat. In an essay published in The Atlantic, Krikorian provides an informed critique of Tesla's Level 2 autonomy approach, describing how the system suddenly lost control during a turn he had navigated hundreds of times. Despite his extensive expertise in building self-driving systems and training safety drivers at Uber—where pilot programs achieved zero injuries—Krikorian was unable to intervene in time to prevent the crash, which left him with a concussion and neck injury.
Krikorian's analysis identifies a fundamental flaw in Tesla's supervised autonomy model: the system is designed to work so reliably that it creates a dangerous trap in which drivers gradually stop paying attention. He explains how the progression from highway driving (where FSD worked well) to local roads conditioned him to trust the system, eventually leading to the accident. The incident also raises critical questions about data accountability: Krikorian's name, not Tesla's, appeared on the insurance report, even though the company collects extensive telemetry on driver behavior and has used that data post-crash to shift blame onto drivers.
Psychological research supports Krikorian's concerns, documenting a phenomenon called "vigilance decrement" where monitoring a nearly-perfect system leads to mind-wandering and boredom. Studies show drivers take 5 to 8 seconds to mentally reengage after an automated system transfers control back, yet emergencies often unfold faster than that window. Krikorian's critique comes amid broader concerns about Tesla's data practices, particularly after a landmark $243 million wrongful-death verdict in Florida where plaintiffs had to hire a hacker to recover evidence Tesla claimed was unavailable.
- Tesla's current supervised autonomy model is fundamentally broken because it asks humans to supervise a system specifically designed to make supervision feel unnecessary
Editorial Opinion
Krikorian's account, coming from someone with genuine expertise in autonomous systems, is a damning indictment of Tesla's approach to Level 2 autonomy. The core insight—that near-perfect reliability paradoxically creates worse outcomes than either fully manual or truly autonomous systems—challenges the entire premise of "supervised" self-driving. Until regulatory frameworks align liability with data access, and until Tesla genuinely solves the vigilance problem rather than exploiting it, FSD remains a fundamentally flawed technology masquerading as a safety feature.