The Malicious Use of AI - Future of Humanity Institute
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Co-authored by 26 researchers from institutions including FHI, CSER, and OpenAI, this 2018 report is one of the most widely cited works on AI misuse risks and helped establish dual-use governance as a serious field of inquiry within AI safety.
Paper Details
Metadata
Abstract
This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
Summary
A landmark 2018 report from the Future of Humanity Institute, Centre for the Study of Existential Risk, and OpenAI analyzing how malicious actors could misuse AI across the digital, physical, and political domains. It forecasts emerging threats over the coming five years and recommends steps that researchers, policymakers, and industry can take to mitigate dual-use risks. The report is widely cited as a foundational framework for thinking about AI misuse and governance.
Key Points
- Identifies three primary threat domains: digital security (cyberattacks, malware), physical security (autonomous weapons, drones), and political security (disinformation, manipulation).
- Argues AI lowers the cost and expertise barrier for malicious actors, enabling attacks at unprecedented scale and speed.
- Calls on AI researchers to treat dual-use concerns as a core professional responsibility, similar to biosecurity norms in biology.
- Recommends red-teaming, restricted publication norms, and closer collaboration between AI researchers and security communities.
- Highlights that AI-enabled disinformation and synthetic media (deepfakes) pose novel and underappreciated political risks.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Proliferation | Risk | 60.0 |
Cached Content Preview
Computer Science > Artificial Intelligence
**arXiv:1802.07228** (cs)
\[Submitted on 20 Feb 2018 ([v1](https://arxiv.org/abs/1802.07228v1)), last revised 1 Dec 2024 (this version, v2)\]
# The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Authors: [Miles Brundage](https://arxiv.org/search/cs?searchtype=author&query=Brundage,+M), [Shahar Avin](https://arxiv.org/search/cs?searchtype=author&query=Avin,+S), [Jack Clark](https://arxiv.org/search/cs?searchtype=author&query=Clark,+J), [Helen Toner](https://arxiv.org/search/cs?searchtype=author&query=Toner,+H), [Peter Eckersley](https://arxiv.org/search/cs?searchtype=author&query=Eckersley,+P), [Ben Garfinkel](https://arxiv.org/search/cs?searchtype=author&query=Garfinkel,+B), [Allan Dafoe](https://arxiv.org/search/cs?searchtype=author&query=Dafoe,+A), [Paul Scharre](https://arxiv.org/search/cs?searchtype=author&query=Scharre,+P), [Thomas Zeitzoff](https://arxiv.org/search/cs?searchtype=author&query=Zeitzoff,+T), [Bobby Filar](https://arxiv.org/search/cs?searchtype=author&query=Filar,+B), [Hyrum Anderson](https://arxiv.org/search/cs?searchtype=author&query=Anderson,+H), [Heather Roff](https://arxiv.org/search/cs?searchtype=author&query=Roff,+H), [Gregory C. Allen](https://arxiv.org/search/cs?searchtype=author&query=Allen,+G+C), [Jacob Steinhardt](https://arxiv.org/search/cs?searchtype=author&query=Steinhardt,+J), [Carrick Flynn](https://arxiv.org/search/cs?searchtype=author&query=Flynn,+C), [Seán Ó hÉigeartaigh](https://arxiv.org/search/cs?searchtype=author&query=h%C3%89igeartaigh,+S+%C3%93), [SJ Beard](https://arxiv.org/search/cs?searchtype=author&query=Beard,+S), [Haydn Belfield](https://arxiv.org/search/cs?searchtype=author&query=Belfield,+H), [Sebastian Farquhar](https://arxiv.org/search/cs?searchtype=author&query=Farquhar,+S), [Clare Lyle](https://arxiv.org/search/cs?searchtype=author&query=Lyle,+C), [Rebecca Crootof](https://arxiv.org/search/cs?searchtype=author&query=Crootof,+R), [Owain Evans](https://arxiv.org/search/cs?searchtype=author&query=Evans,+O), [Michael Page](https://arxiv.org/search/cs?searchtype=author&query=Page,+M), [Joanna Bryson](https://arxiv.org/search/cs?searchtype=author&query=Bryson,+J), [Roman Yampolskiy](https://arxiv.org/search/cs?searchtype=author&query=Yampolskiy,+R), [Dario Amodei](https://arxiv.org/search/cs?searchtype=author&query=Amodei,+D)
[View PDF](https://arxiv.org/pdf/1802.07228)