US-China perspectives on extreme AI risks and global governance
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Comparative analysis of US and China expert perspectives on extreme AI risks and international governance, providing insights into how leading technical and policy figures in each country conceptualize AI safety challenges and cooperation possibilities.
Paper Details
Metadata
Abstract
The United States and China will play an important role in navigating safety and security challenges relating to advanced artificial intelligence. We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence, extreme risks from AI, and the potential for international cooperation. Specifically, we compiled publicly-available statements from major technical and policy leaders in both the United States and China. We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI), that may have the most significant impacts on national and global security. Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control. Both countries have also launched early efforts designed to promote international cooperation around safety standards and risk management practices. Notably, our findings only reflect information from publicly available sources. Nonetheless, our findings can inform policymakers and researchers about the state of AI discourse in the US and China. We hope such work can contribute to policy discussions around advanced AI, its global security threats, and potential international dialogues or agreements to mitigate such threats.
Summary
This study analyzes publicly available statements from technical and policy leaders in the United States and China to understand how experts in each country perceive safety and security threats from advanced AI, particularly artificial general intelligence (AGI). The research finds that experts in both countries share concerns about AGI risks, intelligence explosions, and loss of human control over AI systems. Both nations have initiated early efforts toward international cooperation on safety standards and risk management. The findings aim to inform policymakers and researchers about AI safety discourse in these two major powers and support discussions on mitigating global AI security threats through potential international agreements.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| China AI Regulatory Framework | Policy | 57.0 |
Cached Content Preview
[License: CC BY 4.0](https://info.arxiv.org/help/license/index.html#licenses-available)
arXiv:2407.16903v1 [cs.CY] 23 Jun 2024
# US-China perspectives on extreme AI risks and global governance
Akash R. Wasil (author correspondence: aw1404@georgetown.edu)
Georgetown University
Tim Durgin
Independent
## 1 Introduction
Artificial intelligence is a transformative technology with major implications for national and global security. Many AI experts have expressed concerns about AI-related national and global security threats. Examples include risks from AI-enabled biological weapons, risks from autonomous AI systems that escape human control, and risks from AI applied in military operations (Bengio et al., 2024; Hendrycks et al., 2023).
... (truncated, 34 KB total)