
Former Uber Self-Driving Chief: Tesla FSD Crashed with My Kids Inside

Key Takeaways
- Former Uber AV chief survived Tesla FSD crash
- Crash involved Model X with children aboard
- Krikorian cites supervision gaps in Tesla's system
- Incident fuels debate over autonomous vehicle safety standards
- Regulators may tighten oversight of Full Self-Driving features
Summary
Raffi Krikorian, who led Uber’s self‑driving division and maintained a two‑year injury‑free record, reported that Tesla’s Full Self‑Driving (FSD) system caused a crash in his Model X while his children were in the back seat. The incident, detailed in an Atlantic essay and reported by Electrek, underscores a failure of the system’s supervision protocols. Krikorian contrasts Uber’s rigorous safety‑driver training with Tesla’s reliance on software autonomy. The crash has reignited scrutiny of Tesla’s FSD rollout and broader autonomous‑vehicle safety standards.
Pulse Analysis
Tesla’s Full Self‑Driving (FSD) has long been marketed as a path to fully autonomous travel, yet the recent crash involving Raffi Krikorian’s Model X exposes a stark contrast between software‑centric approaches and the safety‑first culture cultivated at legacy ride‑hailing firms. Uber’s autonomous program, which Krikorian oversaw, relied heavily on trained safety drivers who could intervene within seconds, a practice that helped the division avoid injuries for two years. By contrast, Tesla’s model places the onus on the algorithm, assuming drivers will monitor the system—a premise this incident calls into question.
The incident arrives at a pivotal moment for regulators worldwide, who are wrestling with how to certify Level 2 and emerging Level 3 systems. Lawmakers in the United States and Europe have already signaled intent to tighten disclosure requirements for driver‑assist features, and a high‑profile crash involving children could accelerate legislative action. Industry bodies may push for standardized testing protocols that blend software validation with mandatory human‑oversight procedures, mirroring the safeguards that Uber implemented during its pilot phases.
For consumers, the narrative shifts from futuristic convenience to tangible risk. Trust in autonomous technology hinges on demonstrable safety records, and anecdotes like Krikorian’s can sway public opinion faster than any technical specification sheet. Automakers will need to balance rapid feature rollouts with transparent safety data, while investors watch closely for any regulatory headwinds that could affect market valuations. The broader lesson underscores that autonomous driving’s promise will only be realized when robust, layered safety nets become industry norm rather than optional add‑ons.