Passengers Trapped Inside Self-Driving Waymo During Anti-Robot Attack in San Francisco
Key Takeaways
- Self-driving cars' safety protocols that prevent movement near pedestrians can inadvertently trap passengers during hostile attacks or harassment
- Anti-robot sentiment in San Francisco has created a new category of risk for autonomous vehicle riders, with documented cases of vandalism, sensor interference, and threats
- Waymo's current design does not allow passengers to take manual control during emergencies, and support teams will not override safety systems even during active threats
Summary
A San Francisco resident named Doug Fulop experienced a harrowing incident in January when a man attacked his driverless Waymo vehicle, punching the windows and threatening to kill the passengers. The car remained immobilized throughout because of safety protocols that prevent movement when pedestrians are nearby. The six-minute attack highlighted a previously unreported vulnerability in autonomous vehicle deployments: passengers becoming trapped during hostile encounters with anti-robot protesters. Waymo's support team remained on the line but declined to manually override the vehicle's safety systems, and police eventually responded after bystanders distracted the attacker long enough for the car to drive away.
The incident represents one of several concerning interactions autonomous vehicles have faced in San Francisco since deployment nearly four years ago, including cases of vandalism, sensor tampering, and passengers being harassed while locked inside. While Waymo characterizes such incidents as "rare occurrences," the case raises questions about passenger safety protocols and whether current autonomous vehicle design adequately addresses security threats unique to driverless operation.
Even so, some passengers still view autonomous vehicles as safer than traditional ride-hailing despite these security vulnerabilities, citing concerns about driver behavior.