Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Future of Humanity Institute

A landmark agenda-setting document from FHI's Centre for the Governance of AI (now an independent organization), widely cited as a foundational reference for researchers entering the AI governance field.

Metadata

Importance: 82/100 · organizational report · primary source

Summary

Published by the Centre for the Governance of AI at FHI Oxford, this foundational report outlines a comprehensive research agenda for AI governance, identifying key challenges around safety, fairness, security, and the concentration of power. It maps the landscape of open questions spanning technical, institutional, and geopolitical dimensions, providing a structured framework for researchers and policymakers to prioritize governance work.

Key Points

  • Identifies three core problem clusters for AI governance: ensuring AI is beneficial, preventing misuse, and managing systemic risks from AI development trajectories.
  • Distinguishes between near-term governance challenges (bias, accountability, liability) and long-term challenges (superintelligence, power concentration, existential risk).
  • Emphasizes the need for international cooperation and robust institutions to govern transformative AI, drawing parallels to other global governance regimes.
  • Calls for interdisciplinary research spanning computer science, law, political science, economics, and philosophy to address governance gaps.
  • Highlights the importance of technical AI safety research being complemented by governance frameworks to ensure beneficial deployment at scale.

Cited by 1 page

Page                          Type     Quality
Governance-Focused Worldview  Concept  67.0
Resource ID: c2e15e64323078f5 | Stable ID: MzAzOTc3MT