Longterm Wiki

Algorithmic Stability: How AI Could Shape the Future of Deterrence


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: CSIS

Published by CSIS (Center for Strategic and International Studies), this is a policy-focused analysis relevant to AI safety researchers concerned with military AI, autonomous weapons, and the systemic risks created by AI deployment in national security contexts, including accidental escalation and reduced human control.

Metadata

Importance: 55/100 · organizational report · analysis

Summary

This CSIS analysis examines how AI systems could transform nuclear and conventional deterrence stability, exploring both stabilizing and destabilizing effects of algorithmic decision-making in high-stakes security contexts. It considers how AI integration into military command, control, and intelligence systems may alter crisis dynamics and escalation risks. The piece offers policy recommendations for maintaining strategic stability in an era of increasing AI adoption by major powers.

Key Points

  • AI could undermine deterrence stability by compressing decision timelines, increasing false-alarm risks, and enabling autonomous escalation in crisis scenarios.
  • Algorithmic systems in military contexts may create new vulnerabilities, including adversarial manipulation and opaque decision-making that reduces human oversight.
  • AI-enhanced ISR (intelligence, surveillance, reconnaissance) could shift strategic balances by improving first-strike capabilities or undermining second-strike survivability.
  • International norms and arms control frameworks may need updating to address AI-enabled military capabilities and prevent miscalculation between nuclear powers.
  • Human-machine teaming and maintaining meaningful human control over critical decisions are proposed as key stabilizing principles.

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 45 KB
Algorithmic Stability: How AI Could Shape the Future of Deterrence 
 Photo: wowinside/AdobeStock
Report by Benjamin Jensen, Yasir Atalan, and Jose M. Macias III
Published June 10, 2024
Available Downloads: Download the Full Report (6,826 KB)
 In the future…

 States will integrate artificial intelligence and machine learning (AI/ML) into their national security enterprises to gain decision advantages over their rivals. The question will not be if a web of algorithms extends across the military, intelligence community, and foreign policy decisionmaking institutions, but how lines of code interact with the human clash of wills at the heart of strategy.
 New technology will change the character but not the nature of statecraft and strategy. States will still combine diplomacy, economic coercion, and influence campaigns with threats of military force to signal rivals and reassure allies. Human decisionmaking, while augmented by algorithms, remains central to strategy formation and crisis management.
 Information about AI/ML capabilities will influence how states manage escalation. Escalation risks will continue to emerge from how warfighting changes the balance of information available to update models and support human decisionmaking. Intelligence gaps on adversary algorithms increase the likelihood of escalation but only once states have crossed the Rubicon and fight beneath the nuclear threshold. 
 
 Introduction

 How will the adoption of AI/ML across a state’s national security enterprise affect crisis decisionmaking? For example, what would the Cuban Missile Crisis look like at machine speed?

 Beyond current policy debates, congressional testimony, new strategies, and a drive to identify, test, and evaluate standards, there is a fundamental question of how computer algorithms will shape crisis interactions between nuclear powers.[1] Further, will refined AI/ML models pull people back from the brink or push them over the edge during crises that are as much about fear and emotion as they are rational decisionmaking? How will humans and machines interact during a crisis between nuclear powers? 

 This edition of On Future War uses a series of wargames as an experiment to analyze how players with 10 or more years of national security experience approach crisis decisionmaking given variable l

... (truncated, 45 KB total)
Resource ID: 065c88f0533ab2b3 | Stable ID: Y2U2YjBlMD