fatal 2018 Uber self-driving car accident in Arizona
incidentdatabase.ai/cite/20/
An AI Incident Database entry documenting the first fatal autonomous vehicle pedestrian accident; critical case study for AI deployment safety, regulatory responses, and the limits of real-world robustness in ML systems.
Metadata
Importance: 72/100 · wiki page · primary source
Summary
Documents the March 2018 fatal collision in Tempe, Arizona, where an Uber autonomous test vehicle struck and killed pedestrian Elaine Herzberg. This was the first recorded pedestrian fatality involving a self-driving car, and the incident revealed critical failures in sensor fusion, emergency braking systems, and human safety oversight. It became a landmark case for AI safety, autonomous vehicle regulation, and deployment accountability.
Key Points
- First pedestrian death caused by an autonomous vehicle; Uber's self-driving system failed to correctly classify the pedestrian and did not activate emergency braking.
- The safety driver was distracted at the moment of impact, highlighting the dangers of over-reliance on human backup oversight in semi-autonomous systems.
- Post-incident investigations revealed that the system had detected the pedestrian but repeatedly misclassified her, demonstrating robustness and generalization failures.
- Led to significant regulatory scrutiny, temporary suspension of Uber's AV program, and broader industry-wide safety protocol reviews.
- Illustrates the real-world consequences of deploying AI systems before adequate safety validation, serving as a key case study in responsible AI deployment.
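The misclassification failure in the key points can be made concrete with a toy sketch. This is not Uber's actual software; all class sequences, thresholds, and function names below are hypothetical. It illustrates the general mechanism investigators described: if a tracker discards an object's accumulated history each time its classification changes, flickering labels can starve the planner of the stable trajectory data it needs before it will commit to emergency braking.

```python
# Toy illustration (hypothetical logic, not Uber's code): how resetting a
# tracked object's history on every reclassification can suppress braking.

BRAKE_HISTORY_FRAMES = 3   # assumed frames of stable history needed to predict a path
DANGER_DISTANCE_M = 20.0   # assumed distance threshold for emergency braking

def should_brake(frames):
    """frames: list of (class_label, distance_m), one per perception cycle."""
    history = []          # accumulated observations for this track
    prev_label = None
    for label, distance in frames:
        if label != prev_label:
            history = []  # reclassification discards the accumulated history
            prev_label = label
        history.append(distance)
        # Brake only once there is enough stable history to trust a trajectory.
        if len(history) >= BRAKE_HISTORY_FRAMES and distance < DANGER_DISTANCE_M:
            return True
    return False

# Stable classification: history accumulates, braking triggers in time.
stable = [("pedestrian", d) for d in (40, 30, 22, 15, 9)]
# Flickering classification: every label change resets the history,
# so the stability requirement is never met before impact range.
flicker = [("vehicle", 40), ("other", 30), ("bicycle", 22),
           ("other", 15), ("bicycle", 9)]

print(should_brake(stable))   # True
print(should_brake(flicker))  # False
```

The same closing distances produce opposite decisions purely because of label instability, which is why the incident is cited as a robustness rather than a sensing failure: the pedestrian was detected, but the system could not act on the detections.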
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Distributional Shift | Risk | 91.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 34 KB
# Incident 20: A Collection of Tesla Autopilot-Involved Crashes
**Description**: Multiple unrelated car accidents, resulting in varying levels of harm, have occurred while a Tesla's Autopilot was in use.
#### Entities
Alleged: [Tesla](https://incidentdatabase.ai/entities/tesla/) developed and deployed an AI system, which harmed [Motorists](https://incidentdatabase.ai/entities/motorists/).
#### Incident Stats
Incident ID
20
Report Count
22
Incident Date
2016-06-30
Editors
Sean McGregor
Applied Taxonomies
[CSETv0](https://incidentdatabase.ai/cite/20/#CSETv0-classifications), [CSETv1](https://incidentdatabase.ai/cite/20/#CSETv1-classifications), [GMF](https://incidentdatabase.ai/cite/20/#GMF-classifications), [MIT](https://incidentdatabase.ai/cite/20/#MIT-classifications)
#### CSETv0 Taxonomy Classifications
[Taxonomy Details](https://incidentdatabase.ai/taxonomies/csetv0)
Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
Specification, Robustness
Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Vehicle/mobile robot
Level of Autonomy
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
Medium
Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Amateur
Public Sector Deploym
... (truncated, 34 KB total)
Resource ID: e3ad4d7f973693b0 | Stable ID: OTNjYjJmZD