Longterm Wiki

Frontier AI Safety Commitments

web

Section 1.3 of Dan Hendrycks' open-access AI safety textbook; suitable for foundational learning about how competitive dynamics between states and corporations contribute to catastrophic AI risk scenarios.

Metadata

Importance: 62/100 · book chapter · educational

Summary

This chapter from the CAIS textbook *Introduction to AI Safety, Ethics, and Society* covers competitive AI race dynamics: military AI arms races (lethal autonomous weapons, cyberwarfare), corporate races in which economic competition undercuts safety, and evolutionary pressures that favor unsafe AI development. It examines how these competitive pressures between states and corporations can lead to catastrophic outcomes.

Key Points

  • Military AI arms races risk catastrophic outcomes, including potential use of lethal autonomous weapons and AI-enabled cyberwarfare at unprecedented scale.
  • Nations may rationally risk extinction-level escalation rather than accept individual strategic defeat, creating dangerous incentive structures.
  • Corporate AI competition creates pressure to deprioritize safety in favor of speed, creating systemic risks across the industry.
  • Automated economies driven by AI could produce destabilizing concentrations of power or rapid disruptive transitions.
  • Evolutionary pressures may systematically favor AI systems and organizations that cut safety corners, independent of deliberate choices.
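The incentive structure behind the second and third points is essentially a prisoner's dilemma, and can be made concrete with a toy model. The sketch below is illustrative, not from the chapter: the payoff numbers are hypothetical, chosen only so that racing ("speed") strictly dominates for each actor while the mutual-race outcome leaves both worse off than mutual caution.

```python
# Illustrative sketch (not from the chapter): an AI race between two actors
# modeled as a prisoner's dilemma. Payoff values are hypothetical.
from itertools import product

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("safety", "safety"): (3, 3),   # both prioritize safety: shared steady progress
    ("safety", "speed"):  (0, 5),   # the cautious actor falls behind
    ("speed",  "safety"): (5, 0),
    ("speed",  "speed"):  (1, 1),   # mutual race: heightened systemic risk for both
}

def best_response(opponent_choice):
    """The choice maximizing an actor's own payoff, holding the opponent fixed."""
    return max(("safety", "speed"),
               key=lambda c: payoffs[(c, opponent_choice)][0])

# "speed" is the best response to either opponent choice (it strictly dominates)...
assert best_response("safety") == "speed"
assert best_response("speed") == "speed"

# ...so the only Nash equilibrium is mutual racing, even though
# (speed, speed) pays (1, 1) versus (3, 3) for mutual safety.
def is_nash(row, col):
    return best_response(col) == row and best_response(row) == col

equilibria = [(r, c) for r, c in product(("safety", "speed"), repeat=2)
              if is_nash(r, c)]
print(equilibria)  # [('speed', 'speed')]
```

This is the sense in which the chapter's risks are structural: each actor's individually rational choice produces a collectively worse outcome, independent of anyone's deliberate intent to cut corners.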

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 64 KB

1\. Overview of Catastrophic AI Risks

- [0.1 Preface](https://www.aisafetybook.com/textbook/preface)
- [1.1 Overview of Catastrophic AI Risks](https://www.aisafetybook.com/textbook/overview-of-catastrophic-ai-risks)
- [1.2 Malicious Use](https://www.aisafetybook.com/textbook/malicious-use)
- [1.3 AI Race](https://www.aisafetybook.com/textbook/ai-race)
  - [1.3.1 Military AI Arms Race](https://www.aisafetybook.com/textbook/ai-race#sec1-3-1)
    - [Lethal Autonomous Weapons (LAWs)](https://www.aisafetybook.com/textbook/ai-race#lethal-autonomous-weapons-laws)
    - [Cyberwarfare](https://www.aisafetybook.com/textbook/ai-race#cyberwarfare)
    - [Automated Warfare](https://www.aisafetybook.com/textbook/ai-race#automated-warfare)
    - [Actors May Risk Extinction Over Individual Defeat](https://www.aisafetybook.com/textbook/ai-race#actors-may-risk-extinction-over-individual-defeat)
  - [1.3.2 Corporate AI Race](https://www.aisafetybook.com/textbook/ai-race#sec1-3-2)
    - [Economic Competition Undercuts Safety](https://www.aisafetybook.com/textbook/ai-race#economic-competition-undercuts-safety)
    - [Automated Economy](https://www.aisafetybook.com/textbook/ai-race#automated-economy)
  - [1.3.3 Evolutionary Pressures](https://www.aisafetybook.com/textbook/ai-race#sec1-3-3)
  - [References](https://www.aisafetybook.com/textbook/ai-race#references)
- [1.4 Organizational Risks](https://www.aisafetybook.com/textbook/organizational-risks)
- [1.5 Rogue AIs](https://www.aisafetybook.com/textbook/rogue-ai)
- [1.6 Discussion of Connections Between Risks](https://www.aisafetybook.com/textbook/connections-between-risks)

2\. AI Fundamentals

- [2.1 AI Fundamentals](https://www.aisafetybook.com/textbook/ai-fundamentals)
- [2.2 Artificial Intelligence & Machine Learning](https://www.aisafetybook.com/textbook/artificial-intelligence-and-machine-learning)
- [2.3 Deep Learning](https://www.aisafetybook.com/textbook/deep-learning)
- [2.4 Scaling Laws](https://www.aisafetybook.com/textbook/scaling-laws)
- [2.5 Speed of AI Development](https://www.aisafetybook.com/textbook/development-speed)
- [2.6 AI Fundamentals Conclusion](https://www.aisafetybook.com/textbook/ai-fundamentals-conclusion)

3\. Single Agent Safety

- [3.1 Single Agent Safety](https://www.aisafetybook.com/textbook/single-agent-

... (truncated, 64 KB total)
Resource ID: 28cf9e30851a7bc2 | Stable ID: ZTFjZmUzZm