RAND Corporation research
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: RAND Corporation
A 2020 RAND commentary relevant to AI safety governance discussions around lethal autonomous weapons systems (LAWS), escalation dynamics, and the need for international coordination on AWS deployment norms.
Metadata
Summary
RAND analyst Burgess Laird examines how AI-enabled autonomous weapons systems (AWS) could destabilize crises and escalate conflicts in U.S.-Russia confrontations. The piece argues that while AWS offer operational advantages, they introduce significant risks to crisis stability that policymakers must weigh carefully. It calls for historically-informed skepticism about revolutionary claims while acknowledging the real proliferation of AWS development efforts.
Key Points
- AWS development is accelerating globally, led by the U.S. and China, with Russia lagging despite Putin's AI rhetoric; all are driven by the desire for conventional-deterrence advantages.
- Autonomous systems that select and engage targets without human intervention pose novel escalation risks distinct from those of previous military technologies.
- The piece urges policymakers to evaluate whether the operational benefits of AWS outweigh the risks to crisis stability before deployment.
- Historical skepticism toward claims of revolutionary military technology is warranted; nuclear weapons stand as the rare genuine exception to incremental change.
- The U.S. military views AI and robotics as key to offsetting Chinese and Russian anti-access/area-denial capabilities and restoring conventional deterrence.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Flash Dynamics | Risk | 64.0 |
Cached Content Preview
# The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations
Commentary
Jun 3, 2020

An Air Force MQ-9 Reaper unmanned aircraft awaits maintenance at Creech Air Force Base, Nevada, December 8, 2016
Photo by Senior Airman Christian Clausen/U.S. Air Force
By [Burgess Laird](https://www.rand.org/pubs/authors/l/laird_burgess.html)
This commentary was originally published by _[Russia Matters](https://russiamatters.org/analysis/risks-autonomous-weapons-systems-crisis-stability-and-conflict-escalation-future-us-russia)_ on June 2, 2020.
What advances in artificial intelligence (AI) will mean for the future battlefield remains opaque. Mainline discourse is overly influenced by exaggerated forecasts such as that reflected in Russian President Vladimir Putin's often repeated 2017 [declaration](https://www.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html) that “whoever becomes the leader” in the sphere of artificial intelligence “will become the ruler of the world” and a [broadly similar claim](https://www.themoscowtimes.com/2020/05/18/russia-is-a-distinct-civilization-putin-says-a70295) he advanced as recently as May 18 when he insisted that without artificial intelligence, hypersonic weapons and other new technologies, “it would be impossible to secure the future of our \[Russia's\] civilization.” Whatever security benefits and military advantages they may bring, as I argue in this piece, the development and posturing of AI-enabled autonomous weapons systems (AWS) is likely to raise significant risks for crisis instability and conflict escalation in future confrontations between the United States and Russia. Policymakers would do well to consider—now—whether the potential risks are worth the hoped-for operational advantages.
With the exception of the advent of nuclear weapons, [history counsels skepticism](https://warontherocks.com/2020/05/the-future-of-warfare-will-continue-to-be-human/) regarding the many recurrent predictions of the revolutionary impact of technological innovation on the character of warfare. Maintaining objectivity and historically-informed skepticism is no less warranted when considering the potential battlefield effects of AWS, or systems that, once activated, are intended to [select and engage targets (PDF)](https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf) without further intervention or guidance from a human. Still, as we
... (truncated, 26 KB total)