
Research on Autonomous Weapons

Paper

Authors

Riley Simmons-Edler · Ryan Badman · Shayne Longpre · Kanaka Rajan

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A policy-oriented paper relevant to AI safety researchers concerned with near-term geopolitical risks from military AI, distinct from long-term AGI alignment concerns; useful for governance and arms control discussions.

Paper Details

Citations: 20 (0 influential)
Year: 2024

Metadata

Importance: 62/100 · arXiv preprint · analysis

Abstract

The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has received comparatively little attention of late compared to risks stemming from superintelligent artificial general intelligence (AGI), but requires fewer assumptions about the course of technological development and is thus a nearer-future issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war. In the case of peer adversaries, this increases the likelihood of "low intensity" conflicts which risk escalation to broader warfare. In the case of non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical issues around the use of military AI such as the risk of civilian casualties, and does not require any superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers on the near-future risks posed by full or near-full autonomy in military technology, and we provide regulatory suggestions to mitigate these risks. We call upon AI policy experts and the defense AI community in particular to embrace transparency and caution in their development and deployment of AWS to avoid the negative effects on global stability and AI research that we highlight here.

Summary

This paper argues that ML-enabled autonomous weapons systems (AWS) pose near-term risks to geopolitical stability by reducing the human, and thus political, costs of warfare, increasing the likelihood of low-intensity conflicts between peer adversaries and enabling wars of aggression against non-peer ones. The authors warn that these destabilizing effects are independent of AGI concerns and of ethical issues such as civilian casualties, and that the military value of AWS may trigger arms races and harmful national security restrictions on AI research. They call for transparency, caution, and regulatory action from the defense AI community and policymakers.

Key Points

  • AWS reduce the upfront human cost of warfare, lowering political barriers to offensive war against both peer and non-peer adversaries.
  • Destabilizing effects occur independently of other ethical concerns (e.g., civilian casualties) and do not require superhuman AI capabilities.
  • The military value of AWS risks triggering an AI-powered arms race and misguided national security restrictions on civilian AI research.
  • Near-term risks from AWS deserve more attention than long-term AGI risks, as they require fewer assumptions about future technological development.
  • Authors recommend regulatory measures including transparency and caution in AWS development and deployment to preserve global stability and research freedom.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Flash Dynamics | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
[License: CC BY 4.0](https://info.arxiv.org/help/license/index.html#licenses-available)

arXiv:2405.01859v1 [cs.CY] 03 May 2024

# AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research

Riley Simmons-Edler
Ryan Badman
Shayne Longpre
Kanaka Rajan

###### Abstract

The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has received comparatively little attention of late compared to risks stemming from superintelligent artificial general intelligence (AGI), but requires fewer assumptions about the course of technological development and is thus a nearer-future issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war. In the case of peer adversaries, this increases the likelihood of “low intensity” conflicts which risk escalation to broader warfare. In the case of non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical issues around the use of military AI such as the risk of civilian casualties, and does not require any superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers on the near-future risks posed by full or near-full autonomy in military technology, and we provide regulatory suggestions to mitigate these risks. We call upon AI policy experts and the defense AI community in particular to embrace transparency and caution in their development and deployment of AWS to avoid the negative effects on global stability and AI research that we highlight here.

Keywords: Machine Learning, ICML, AI Policy, AI and Society, Military AI, AI Safety

## 1 Introduction

With the rise of powerful generative AI models such as GPT-4 and Stable Diffusion, and continued progress in fields such as robotics and reinforcement learning, concerns have grown among both experts and the public about giving AI too much power.
Academic concerns have thus far centered on threats in domains such as cybersecurity, biological weapons, disinformation, fraud, and hypothetical rogue artificial general intelligence (AGI) (Future of Life Institute, [2023](https://arxiv.org/html/2405.01859v1#bib.bib48 ""); Bengio, [2023](https://arxiv.org/html/2405.01859v1#bib.bib10 "")).
Despite these general concerns, there has been relatively little attention given to specific recent developments from military and defense-industry groups, which have

... (truncated, 98 KB total)