Longterm Wiki

The Malicious Use of AI Report

Web: maliciousaireport.com/

Originally published in 2018 by a coalition including the Future of Humanity Institute, OpenAI, and the Center for a New American Security; considered a foundational document on AI misuse risks and dual-use concerns.

Metadata

Importance: 72/100 · organizational report · analysis

Summary

A report examining how AI technologies can be exploited by malicious actors across the digital, physical, and political domains. It analyzes near-term threats from AI misuse and offers recommendations for researchers, policymakers, and industry to mitigate these risks.

Key Points

  • Identifies three major threat domains: digital security (cyberattacks), physical security (autonomous weapons, drones), and political security (disinformation, surveillance).
  • Argues that AI lowers the cost and expertise required for malicious actors to conduct sophisticated attacks at scale.
  • Recommends dual-use research norms, responsible disclosure practices, and proactive engagement between AI researchers and security communities.
  • Highlights that AI can automate and scale social engineering, spear-phishing, and fake media generation for influence operations.
  • Calls for policymakers to anticipate and prepare for AI-enabled threats rather than react after harms occur.

Cited by 1 page

Resource ID: 5fde590180ca07a6 | Stable ID: ZmJmMmFlNT