Longterm Wiki

AGILE Index on Global AI Safety Readiness

A February 2025 index from Chinese AI safety institutions benchmarking 40 nations on AI safety readiness; useful for comparative governance research but reflects a specific institutional and geopolitical perspective.

Metadata

Importance: 58/100 · organizational report · analysis

Summary

The GIAIS is a systematic cross-national assessment framework evaluating 40 countries across six pillars: governance environment, national institutions, governance instruments, research status, international participation, and existential safety preemption. Developed by Chinese AI safety institutions under the AGILE framework, it finds that developed countries are better-prepared, international cooperation is nascent, and existential safety planning is lacking globally. It aims to serve as a diagnostic tool to identify gaps and encourage international coordination.

Key Points

  • Covers 40 countries across 6 pillars and 12 dimensions, providing one of the few structured comparative assessments of national AI safety readiness.
  • Key finding: no country scores well on existential safety preemption, highlighting a near-universal gap in long-term AI risk planning.
  • Developed countries generally outperform others, but the global AI safety environment is described as increasingly severe.
  • International AI safety cooperation is forming but lacks broad participation; governance instruments (laws, policies) exist only in select countries.
  • Produced by Beijing-AISI and the Chinese Academy of Sciences, offering a non-Western institutional perspective on global AI safety governance.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Agentic AI | Capability | 68.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 23 KB
# Global Index for AI Safety

AGILE Index on Global AI Safety Readiness, Feb 2025

GOVERNANCE ENVIRONMENT

![](https://www.agile-index.ai/images/9465410643584671e417a27d5450e9efe0c6eb437be296af3a627c34762fb195.jpg)

• Center for Long-term Artificial Intelligence (CLAI) • Beijing Institute of AI Safety and Governance (Beijing-AISI) • Beijing Key Laboratory of Safe AI and Superalignment • Institute of Automation, Chinese Academy of Sciences

As artificial intelligence (AI) technologies experience explosive growth and proliferate across global industries, their transformative potential is increasingly accompanied by complex safety and security risks. From malicious exploitation and deceptive applications to privacy breaches, unintended consequences, and existential risks, the dual-use nature of AI and its global impact demand comprehensive safeguards and strengthened international cooperation. Against this backdrop, understanding how countries navigate AI safety challenges—through policy innovation, technical safeguards, and multilateral cooperation—has become critical to shaping a safe and sustainable future.

Developed under the theoretical framework of AI Governance InternationaL Evaluation (AGILE) Index, this Global Index for AI Safety (GIAIS) provides a systematic assessment of national capabilities, current status and preparedness in addressing AI safety challenges. The evaluation of the index covers six pillars: Governance Environment for AI Safety, National Institutions Targeting AI Safety, Governance Instruments for AI Safety, Research Status on AI Safety, International Participation on AI Safety, and Existential Safety Preemption. It currently includes 12 dimensions to depict the governance status of AI safety readiness across 40 countries.
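The report does not publish its aggregation formula, but the pillar structure it describes can be sketched as a simple data model. The following is a minimal illustration only, assuming equal-weighted averaging over the six pillars; the pillar names come from the report, while the weighting scheme and function names are hypothetical.

```python
# Hypothetical sketch of the GIAIS pillar structure. The six pillar names
# are taken from the report; the equal-weighted aggregation is an assumption
# for illustration, not the report's actual methodology.

PILLARS = [
    "Governance Environment for AI Safety",
    "National Institutions Targeting AI Safety",
    "Governance Instruments for AI Safety",
    "Research Status on AI Safety",
    "International Participation on AI Safety",
    "Existential Safety Preemption",
]

def composite_score(pillar_scores: dict[str, float]) -> float:
    """Equal-weighted mean over the six pillars (assumed weighting)."""
    missing = [p for p in PILLARS if p not in pillar_scores]
    if missing:
        raise ValueError(f"missing pillar scores: {missing}")
    return sum(pillar_scores[p] for p in PILLARS) / len(PILLARS)

# Example: a country scoring 50 on every pillar gets a composite of 50.
example = {p: 50.0 for p in PILLARS}
print(composite_score(example))
```

Under any such scheme, the report's central finding follows directly: a near-zero score on Existential Safety Preemption drags down every country's composite, regardless of strength on the other five pillars.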

Through this assessment, we find:

Developed countries are generally better-prepared in addressing AI safety challenges.

The global AI safety environment is becoming increasingly severe in recent years.

National AI safety institutions are rapidly emerging in various forms.

Related laws, policies, and tools are being implemented, but only in some countries.

AI safety research has surged, focusing on topics such as alignment and privacy security.

International AI safety cooperation is forming but needs wider participation.

AI existential safety preemption and planning are lacking in all countries.

The assessment does not seek to categorize countries as either paragons or laggards. AI's safety challenges affect us all, and no country can solve them alone, no matter how well it has done individually. Levels of AI development and adoption naturally vary with each country's stage of development, but every country should prepare more seriously for AI safety, given its unpredictability, its potentially profound negative impacts, and its tendency to proliferate. The GIAIS therefore acts as an assessment tool, helping countries recognize their circumstance

... (truncated, 23 KB total)