Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Carnegie Endowment

Relevant for researchers and policymakers tracking international AI governance developments, particularly the divergence and potential convergence between Chinese and Western approaches to AI safety regulation as of mid-2024.

Metadata

Importance: 55/100 · organizational report · analysis

Summary

This Carnegie Endowment for International Peace analysis examines China's emerging regulatory framework for artificial intelligence safety, covering how Chinese authorities are approaching AI governance, risk management, and safety standards. It provides comparative context for understanding how China's approach differs from Western regulatory models.

Key Points

  • Analyzes China's distinct regulatory approach to AI safety, including sector-specific rules and centralized oversight mechanisms
  • Examines how Chinese AI governance balances innovation promotion with safety and security concerns
  • Explores implications of China's AI safety regulations for international coordination and global AI governance
  • Considers how China's regulatory frameworks address generative AI and foundation model risks specifically
  • Provides context for US-China divergence in AI safety standards and potential areas of alignment

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 32 KB
![Blue and green lights in wavy lines across the bottom third of the image on a black background](https://assets.carnegieendowment.org/_/eyJrZXkiOiJzdGF0aWMvbWVkaWEvaW1hZ2VzL1RlY2hub2xvZ3ktQUktaVN0b2NrLTE2NjgyNDczNDYtZWRpdC5wbmcifQ==)

iStock

Article

## China’s Views on AI Safety Are Changing—Quickly

Beijing’s AI safety concerns are higher on the priority list, but they remain tied up in geopolitical competition and technological advancement.


By [Matt Sheehan](https://carnegieendowment.org/people/matt-sheehan)

Published on Aug 27, 2024

Over the past two years, China’s artificial intelligence (AI) ecosystem has undergone a significant shift in how it views and discusses AI safety. For many years, some of the leading AI scientists in Western countries have been warning that future AI systems could become powerful enough to pose [catastrophic risks](https://www.safe.ai/work/statement-on-ai-risk) to humanity. Concern over these risks—often grouped under the umbrella term “AI safety”—has sparked new fields of technical research and led to the creation of governmental AI safety institutes in the [United States](https://www.nist.gov/aisi), the [United Kingdom](https://www.aisi.gov.uk/), and elsewhere. But for most of the past five years, it was unclear whether these concerns about extreme risks were shared by Chinese scientists or policymakers.

Today, there is mounting evidence that China does indeed share these concerns. A growing number of research papers, public statements, and government documents suggest that China is treating AI safety as an increasingly urgent concern, one worthy of significant technical investment and potential regulatory interventions. Momentum around AI safety first began to build within China’s elite technical community, and it now appears to be gaining some traction in the country’s top policy circles. In a potentially significant move, the Chinese Communist Party (CCP) released a major policy document in July 2024 that included a call to create “oversight systems to ensure the safety of artificial intelligence.”

There remain major open questions about the specific contours of China’s concerns over AI safety and what it intends to do about them. But the growing political and technical salience of these issues is significant for AI safety and governance globally. China is the key competitor for the United States in advanced AI, and that competition is a core dynamic shaping AI development globally. China’s leaders are acutely concerned with falling further behind the United States and are pushing hard to catch up in advanced AI. How China approaches building those frontier AI systems—the risks it sees and the safeguards it builds in—will influence the safety of systems built in China and around the world.

### Contested Science and Translation Troubles

Before examining the evidence of this shift in China, it is important to note that, globally, AI safety remains a deeply contested concept on both a technol

... (truncated, 32 KB total)
Resource ID: d0e36601100c356d | Stable ID: MjMwMzRkMT