Longterm Wiki

academic research on US-China AI perspectives

paper

Authors

Akash Wasil · Tim Durgin

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Academic research analyzing how US and Chinese AI experts conceptualize safety, security, and extreme risks from advanced AI, with implications for international cooperation on AI governance.

Paper Details

Citations
0
0 influential
Year
2024

Metadata

arXiv preprint · primary source

Abstract

The United States and China will play an important role in navigating safety and security challenges relating to advanced artificial intelligence. We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence, extreme risks from AI, and the potential for international cooperation. Specifically, we compiled publicly-available statements from major technical and policy leaders in both the United States and China. We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI), that may have the most significant impacts on national and global security. Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control. Both countries have also launched early efforts designed to promote international cooperation around safety standards and risk management practices. Notably, our findings only reflect information from publicly available sources. Nonetheless, our findings can inform policymakers and researchers about the state of AI discourse in the US and China. We hope such work can contribute to policy discussions around advanced AI, its global security threats, and potential international dialogues or agreements to mitigate such threats.

Summary

This research analyzes publicly available statements from technical and policy leaders in the United States and China to understand their perspectives on AI safety and security challenges, particularly regarding advanced AI systems like artificial general intelligence (AGI). The study finds that experts in both countries share concerns about AGI risks, intelligence explosions, and loss of human control over AI systems, and notes that both nations have initiated early efforts toward international cooperation on safety standards and risk management. The findings aim to inform policymakers and researchers about the current state of AI discourse in both countries and support discussions on mitigating global AI security threats.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| China AI Regulatory Framework | Policy | 57.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 31 KB
# US-China perspectives on extreme AI risks and global governance

Akash R. Wasil (author correspondence: aw1404@georgetown.edu)

Georgetown University

Tim Durgin

Independent


## 1 Introduction

Artificial intelligence is a transformative technology with major implications for national and global security. Many AI experts have expressed concerns about AI-related national and global security threats. Examples include risks from AI-enabled biological weapons, risks from autonomous AI systems that escape human control, and risks from AI applied in military operations (Bengio et al., 2024; Hendrycks et al., 2023).

The United States and China are the world’s leaders in AI development. In 2022, the size of the AI market in the US amounted to USD 103.7B (Statista-1, 2024), while the size of the Chinese AI market was approximately USD 40B (Statista-2, 2024). As of 2022, 26 percent of the world’s top AI researchers came from China, while 28 percent came from the United States (Yang, 2024). US companies dominate the top of the large language model (LLM) rankings: examples include OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini (Guinness, 2024). While most Chinese LLMs still lag behind, there has been notable progress in recent years. Specifically, Moonshot AI’s Kimi model can, under certain conditions, achieve performance comparable to GPT-4 (Zhang, 2023).

We aimed to acquire a better understanding of AI policy discourse in China and the United States. Discourse about US-China relations is often partisan, rather than grounded in objective data. Discussions about US-China AI policy

... (truncated, 31 KB total)