Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Lawfare

Useful policy analysis for those exploring international AI governance frameworks, drawing direct parallels to nuclear and chemical weapons nonproliferation regimes and their real-world shortcomings.

Metadata

Importance: 62/100 · opinion piece · analysis

Summary

This Lawfare analysis by Akash Wasil examines whether the International Atomic Energy Agency (IAEA) model could serve as a template for international AI governance, using case studies from Iran, Syria, and Russia to identify both the strengths and significant limitations of such institutions. The piece argues that any 'IAEA for AI' proposal must seriously grapple with well-documented verification and enforcement challenges faced by the IAEA and OPCW.

Key Points

  • Nations at the 2023 AI Safety Summit acknowledged potential for 'serious, even catastrophic harm' from advanced AI, including deliberate misuse for bioweapons and loss of human control.
  • Scholars and figures like Sam Altman have proposed using the IAEA as a model for international AI governance institutions.
  • Case studies from Iran, Syria, and Russia reveal significant verification and enforcement limitations in existing arms control institutions like the IAEA and OPCW.
  • The White House 2024 national security memorandum on AI already directs multilateral engagement strategies, signaling active interest in international AI governance frameworks.
  • Any viable international AI governance regime must account for the challenges of verifying compliance and enforcing agreements among competing nation-states.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| International Compute Regimes | Concept | 67.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 25 KB

## [Akash Wasil](https://www.lawfaremedia.org/contributors/awasil)



In November 2023, nations at the first global AI Safety Summit [recognized the possibility of “serious, even catastrophic harm”](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) from advanced artificial intelligence (AI). Some of the risks identified stem from deliberate misuse. For example, a nation could decide to instruct an advanced AI system to develop novel biological weapons or cyberweapons; [Anthropic CEO Dario Amodei testified in 2023](https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf) that AI systems would be able to greatly expand threats from “large-scale biological attacks” within two to three years. Other risks mentioned arise from unintentional factors— [experts have warned](https://hdsr.mitpress.mit.edu/pub/w974bwb0/release/2), for instance, that AI systems could become powerful enough to subvert human control. A race toward superintelligent AI could lead to the creation of highly powerful and dangerous systems before scientists have developed the [safeguards and technical understanding required to control them](https://briefy.ai/summary/v2/gnlbqtqimkcmlkc5izxaf7dm/en/yoshua-bengio-understanding-and-mitigating-loss-of-control).

Many proposals to mitigate these risks have focused on the importance of international coordination. The recent White House [national security memorandum on AI](https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/), for example, directs the Department of State to form an international AI governance strategy that outlines multilateral engagement with allies, partners, and competitors. As international AI governance discussions advance, nations may consider how certain kinds of dangerous AI development could be restricted and how such agreements could be [verified](https://arxiv.org/abs/2408.16074).

Accordingly, [some scholars](https://carnegieendowment.org/research/2024/03/envisioning-a-global-regime-complex-to-govern-artificial-intelligence?lang=en)—and public [figures such as OpenAI CEO Sam Altman](https://studentreview.hks.harvard.edu/an-interview-with-openai-ceo-sam-altman/)—have [turned to the International Atomic Energy Agency (IAEA)](https://www.nature.com/articles/s42254-023-00670-4) as a potential [model for international AI institutions](https://p

... (truncated, 25 KB total)
Resource ID: 6f171f833897de2c | Stable ID: MmVmOTg0OT