Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: CSET Georgetown

A CSET policy analysis relevant to debates about how to regulate frontier AI, particularly for those evaluating analogies between AI governance and arms control regimes like nuclear non-proliferation.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This CSET Georgetown article argues that applying nuclear non-proliferation frameworks to AI governance is fundamentally misguided due to key differences between the technologies. It outlines why AI's diffuse development, dual-use nature, and rapid commoditization make proliferation-style controls impractical and potentially counterproductive. The piece proposes alternative governance approaches better suited to AI's unique characteristics.

Key Points

  • Nuclear non-proliferation relies on controlling scarce physical materials and specialized facilities, conditions that do not map onto AI development.
  • AI capabilities are increasingly commoditized and widely distributed, making supply-side restriction strategies far less effective than with nuclear tech.
  • Applying proliferation frameworks risks creating false security while stifling beneficial AI research and international cooperation.
  • Alternative governance models should focus on norms, standards, transparency, and use-case regulation rather than technology denial.
  • Policymakers need AI-specific frameworks that account for the technology's dual-use nature and rapid pace of diffusion.

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 5 KB
Nuclear Non-Proliferation Is the Wrong Framework for AI Governance | Center for Security and Emerging Technology 
 In The News 

 Nuclear Non-Proliferation Is the Wrong Framework for AI Governance

 
 
 AI Frontiers

 
 June 27, 2025 
 
 CSET’s Lauren A. Kahn and CFR's Michael C. Horowitz shared their expert analysis in an op-ed published by AI Frontiers. In their piece, they examine the growing calls to regulate artificial intelligence in ways similar to nuclear technology. 

They highlight recent comments by Demis Hassabis, CEO of Google DeepMind, who suggested creating an international AI oversight body modeled after the International Atomic Energy Agency.

 The authors emphasized the limits of the nuclear comparison, noting, “No analogy is perfect, but especially as a general-purpose technology, AI differs so fundamentally from nuclear technology that basing AI policy around the nuclear analogy is conceptually flawed and risks inflating expectations about the international community’s ability to control model proliferation.”


To read the full article, visit AI Frontiers.

 Authors

Lauren A. Kahn
 Michael C. Horowitz

 Original Publisher

 AI Frontiers

 Originally Published

 Jun 27, 2025

 Topics

 
 Assessment
 Related Content

 Reports 
 AI Safety and Automation Bias

 
 November 2024 
 
Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design,…

 Reports 
 Governing AI with Existing Authorities

 
 July 2024 
 
A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or if the government needs new legal powers to manage the technology. The authors…

 The Lawfare Podcast: Governing the Use of Autonomous Weapons and AI in Warfare with Lauren Kahn

 
 March 2024 
 
 In a recent episode of The Lawfare Podcast, which explore

... (truncated, 5 KB total)
Resource ID: 076fea2a9efa2206 | Stable ID: MGM3NWRiNj