Longterm Wiki

Aviation industry shows

web

Published by UC Berkeley's Center for Long-Term Cybersecurity (CLTC), this report is useful for researchers and policymakers exploring analogies between aviation safety regulation and AI governance frameworks.

Metadata

Importance: 52/100 · organizational report · analysis

Summary

This CLTC Berkeley report examines how the aviation industry's rigorous safety culture, certification processes, and regulatory frameworks can inform AI safety practices. It draws parallels between aviation's evolution as a safety-critical domain and the challenges facing AI deployment, offering concrete lessons for developing robust AI safety standards.

Key Points

  • Aviation developed layered safety systems over decades through incident analysis, redundancy, and continuous improvement—a model applicable to AI safety engineering.
  • Regulatory certification processes in aviation (e.g., FAA standards) provide a template for how AI systems in high-stakes domains might be evaluated and approved.
  • Safety culture in aviation emphasizes human-machine collaboration, error reporting, and systemic thinking rather than blame—lessons transferable to AI governance.
  • The report highlights the importance of industry-wide standards and independent oversight bodies for managing safety-critical AI systems.
  • Key differences between aviation and AI are also identified, including AI's faster iteration cycles and the opacity of ML models compared to traditional software.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Pause Advocacy | Approach | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 9 KB



White Paper /
August 2020

# The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry

By [Will Hunt](https://cltc.berkeley.edu/people/will-hunt/)

[Download the report](https://cltc.berkeley.edu/wp-content/uploads/2020/08/Flight-to-Safety-Critical-AI.pdf)

The Center for Long-Term Cybersecurity has issued a new report that assesses how competitive pressures have affected the speed and character of artificial intelligence (AI) research and development in an industry with a history of extensive automation and impressive safety performance: aviation. [_The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry_](https://cltc.berkeley.edu/wp-content/uploads/2020/08/Flight-to-Safety-Critical-AI.pdf), authored by **Will Hunt**, a graduate researcher at the [AI Security Initiative](https://cltc.berkeley.edu/ai-security-initiative/), draws on interviews with a wide range of experts from across the industry and finds limited evidence of an AI “race to the bottom” and some evidence of a (long, slow) race to the top.

Rapid progress in the field of AI over the past decade has generated both enthusiasm and rising concern. The most sophisticated AI models are powerful — but also opaque, unpredictable, and accident-prone. Policymakers and AI researchers alike fear the prospect of a “race to the bottom” on AI safety, in which firms or states compromise on safety standards while trying to innovate faster than the competition.

But current discussions of the existing or future race to the bottom in AI elide two important observations. First, different industries and regulatory domains experience a wide range of competitive dynamics — including races to the top and middle — and claims about races to the bottom often lack empirical support. Second, AI is a general-purpose technology with applications across every industry. As such, we should expect significant variation in competitive dynamics and consequences for AI from one industry to the next.

This paper analyzes the nature of competitive dynamics surrounding AI safety on an issue-by-issue and industry-by-industry basis. Rather than discuss the risk of “AI races” in the abstract, this research focuses on how the issue of AI safety has been navigated by a particular industry: commercial aviation, an industry where safety is critically important and automation is common.

Do the competi

... (truncated, 9 KB total)
Resource ID: f506ac6ce794b21a | Stable ID: NWJmNGY2Ym