CnAISDA launched February 2025
Credibility Rating
4/5
High (4) — High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Carnegie Endowment
Relevant for understanding how AI safety institutionalization is developing in China, and what this means for global coordination efforts; useful context for anyone studying international AI governance dynamics.
Metadata
Importance: 55/100 | Type: organizational report, analysis
Summary
This Carnegie Endowment article examines the founding of the China AI Safety and Development Association (CnAISDA) in February 2025, exploring how leading Chinese AI researchers established a domestic AI safety institute. It analyzes the motivations, structure, and priorities of Chinese AI safety efforts, and what this means for global AI governance.
Key Points
- CnAISDA was launched in February 2025 by prominent Chinese AI researchers as a non-governmental AI safety organization.
- The institute reflects growing domestic awareness in China of AI risks, separate from state-led regulatory bodies.
- Its formation signals a potential convergence or parallel development of AI safety culture in China alongside Western institutions.
- The article explores how CnAISDA's agenda may differ from Western AI safety priorities, including its emphasis on social stability and content governance.
- The development has implications for international AI safety coordination and for whether US-China cooperation on safety is feasible.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| China AI Regulatory Framework | Policy | 57.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 15, 2026 | 72 KB
How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute | Carnegie Endowment for International Peace
Paper
The emergence of the China AI Safety and Development Association (CnAISDA) is a pivotal moment for China’s frontier AI governance. How it navigates substantial domestic challenges and growing geopolitical tensions will shape conversations on frontier AI risks in China and abroad.
By Scott Singer, Karson Elmgren, Oliver Guest | Published on Jun 16, 2025
Full Text (PDF)
Program: Technology and International Affairs
The Technology and International Affairs Program develops insights to address the governance challenges and large-scale risks of new technologies. Our experts identify actionable best practices and incentives for industry and government leaders on artificial intelligence, cyber threats, cloud security, countering influence operations, reducing the risk of biotechnologies, and ensuring global digital inclusion.
Executive Summary
Since the January 2025 release of the DeepSeek-R1 open-source reasoning model, China has increasingly prioritized leveraging artificial intelligence (AI) as a key engine for economic growth, encouraged AI diffusion domestically, and continued to pursue self-sufficiency across the AI stack. Yet while China has been investing heavily in AI development and deployment, it has also begun to talk more concretely about catastrophic risks from frontier AI and the need for international coordination. The February 2025 launch of the China AI Safety and Development Association (CnAISDA, 中国人工智能发展与安全研究网络)—China’s self-described counterpart to the AI safety institutes (AISIs) that the United Kingdom, United States, and other countries have launched over the last two years—offers a critical data point on the state of China’s rapidly evolving AI safety conversation.
Despite its potential importance, little has been publicly reported on CnAISDA. What is it? How did it come about? And what does it signal about the direction of Chinese AI policy more broadly? This paper provides the first comprehensive analysis of these questions.
What is CnAISDA?
Function. As of this writing, CnAISDA’s primary function is to represent China in international AI conversations, including those with other AISIs, underscoring China’s willingness to engage on frontier AI issues outside its traditionally preferred venue, the United Nations. Unlike the United Kingdom’s and United States’ AISIs, CnAISDA does not currently appear to be structured to carry out substantial domestic functions, such as independently testing and evaluating frontier AI models.
Structure. CnAISDA integrates multiple existing Chinese AI-focused institutions into a network structure. Rather than being a new stand-alone agency to govern AI, CnAISDA is more of a coalition to represent China abroad, as well as to advi
... (truncated, 72 KB total)
Resource ID: 0f17105b7e24c08a | Stable ID: N2U5M2QwNz