Longterm Wiki

OECD AI Policy Observatory (https://oecd.ai/en/)

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OECD

The OECD AI Policy Observatory is a key intergovernmental resource for AI governance; its AI Principles are widely cited in policy and safety discussions as a baseline international standard for responsible AI development and deployment.

Metadata

Importance: 62/100 · homepage

Summary

The OECD AI Policy Observatory is a global hub for AI policy information, analysis, and tools, supporting the implementation of the OECD AI Principles adopted by over 40 countries. It tracks national AI strategies, regulations, and initiatives while providing data and analysis to help governments shape trustworthy and beneficial AI policies.

Key Points

  • Central repository for AI policy developments, national strategies, and regulatory frameworks across OECD and partner countries
  • Hosts the OECD AI Principles — the first intergovernmental standard on AI, covering transparency, accountability, robustness, and human-centric values
  • Provides tools and datasets tracking AI investments, compute trends, incidents, and algorithmic accountability measures
  • Facilitates international coordination on AI governance through expert networks, working groups, and policy briefs
  • Serves as a reference point for policymakers, researchers, and civil society engaging with AI safety and governance debates

Cited by 1 page

| Page | Type | Quality |
|---|---|---|
| AI Policy Effectiveness | Analysis | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 2 KB
# Policies, data and analysis for trustworthy artificial intelligence

In July 2024, the GPAI initiative and OECD member countries' work on AI joined forces under the GPAI brand to create an integrated partnership. [Find out more](https://oecd.ai/en/about/about-gpai).

## Featured

  • [AI Futures](https://oecd.ai/en/site/ai-futures)
  • [AI Compute and the Environment](https://oecd.ai/en/site/compute-climate)
  • [AI Risk & Accountability](https://oecd.ai/en/site/risk-accountability)
  • [AI & Health](https://oecd.ai/en/site/health)
  • [AI Incidents](https://oecd.ai/en/site/incidents)
  • [AI, Data & Privacy](https://oecd.ai/en/site/data-privacy)
  • [Generative AI](https://oecd.ai/en/genai)


### OECD AI Incidents Monitor (AIM)

AIM tracks AI incidents in the global press for valuable insights into existing AI risks.

[Explore AIM](https://oecd.ai/en/incidents)

### OECD Catalogue of Tools & Metrics for Trustworthy AI

The catalogue helps AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

[Visit the Catalogue](https://oecd.ai/en/catalogue/overview)