Longterm Wiki

Credibility Rating

4/5 — High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Carnegie Endowment

Relevant for understanding how AI safety governance discourse is globalizing, and whether convergence between Chinese and Western AI firms on safety language represents meaningful progress or performative compliance with reputational norms.

Metadata

Importance: 45/100 · organizational report · analysis

Summary

This Carnegie Endowment analysis examines how Chinese AI companies, including DeepSeek, are increasingly adopting safety commitments and responsible AI language similar to Western counterparts. It explores whether this convergence reflects genuine alignment on AI safety norms or primarily serves regulatory and reputational purposes, with implications for global AI governance.

Key Points

  • Chinese AI firms like DeepSeek are publicly adopting safety pledges and responsible AI frameworks that mirror Western company commitments.
  • The convergence in safety rhetoric may create opportunities for international coordination but questions remain about enforcement and sincerity.
  • Content-control priorities differ significantly between Chinese and Western firms, complicating comparisons of what each side means by "safety."
  • The piece highlights how geopolitical competition shapes how AI safety commitments are framed and marketed by different national actors.
  • Voluntary commitments by AI companies globally face scrutiny over whether they translate into meaningful technical or behavioral changes.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 28 KB
![Floor signage for the offices of DeepSeek (C) is seen in Beijing on January 28, 2025. ](https://assets.carnegieendowment.org/_/eyJrZXkiOiJzdGF0aWMvbWVkaWEvaW1hZ2VzL2RlZXBzZWVrLWFpLXRlY2hub2xvZ3ktR2V0dHlJbWFnZXMtMjE5NTc5NzE2NC5qcGcifQ==)

Source: Getty

Article

## DeepSeek and Other Chinese Firms Converge with Western Companies on AI Promises

The AI race is breaking open. An upcoming summit offers an opportunity for U.S. and Chinese companies to agree on safety and security measures.


By [Scott Singer](https://carnegieendowment.org/people/scott-singer)

Published on Jan 28, 2025

DeepSeek, a previously little-known Chinese AI start-up, sent shockwaves globally when it released one of the world’s best performing open-source generative AI models last week. As Chinese frontier AI capabilities rapidly grow, so do the risks that a Chinese model could be powerful enough to cause global harm. An earlier, less powerful version of DeepSeek’s model [was shown to be easily jailbroken](https://techcrunch.com/2024/11/20/a-chinese-lab-has-released-a-model-to-rival-openais-o1/) and reportedly provided a user with a recipe for methamphetamine.

Despite growing global concern around large-scale risks, the U.S. and Chinese governments have made little progress on a bilateral agreement to regulate frontier AI. But a surprising consensus among leading AI developers in both countries around the need for safeguards has quietly emerged, including DeepSeek.

Last month, DeepSeek joined sixteen other Chinese companies in [signing onto the Artificial Intelligence Safety Commitments](https://mp.weixin.qq.com/s/s-XFKQCWhu0uye4opgb3Ng) ( _人工智能安全承诺_). While branded as a domestic Chinese initiative, the commitments bear strong similarity to ongoing global industry-led efforts to put safeguards in place for frontier AI, piloted at last year’s AI Summit in Seoul and known as the [Seoul Commitments](https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024). Using similar language, both sets of commitments outline promises to conduct red-teaming exercises to identify severe threats, provide transparency into frontier model capabilities and limitations, and build organizational structures to promote the security of frontier systems.

The distinctive similarities between China’s AI Safety Commitments and the Seoul Commitments could set the stage for further global convergence among industry actors on best practices at the upcoming Paris AI Action Summit, scheduled for mid-February. Only one Chinese company, Zhipu.ai, signed onto the Seoul Commitments at the Seoul summit last year. If more Chinese firms announce in Paris that they will sign onto the Seoul Commitments, they could help lay the groundwork for a global, industry-based consensus on AI. Such a consensus could offer an informal pathway to building foundational rules of the road in international AI governance.

### As Frontier

... (truncated, 28 KB total)
Resource ID: e3274b108aac1712 | Stable ID: MWQwZDJhNj