
The Centre for the Governance of AI (GovAI) is a research organization focused on AI policy and governance. Originally part of the Future of Humanity Institute (FHI) at Oxford, GovAI began operating independently in 2021 and formally incorporated as a separate entity before FHI closed in April 2024. It is led by Director Ben Garfinkel, with Markus Anderljung directing policy and research.

Revenue: $5.7 million (as of 2024)
Headcount: 40–45 (as of 2025)
Total Funding Raised: $13.3 million (as of 2025)

Facts

Financial
Grant Received: $3.0 million
Headcount: 40–45
Total Funding Raised: $13.3 million
Revenue: $5.7 million
Annual Expenses: $919,661
Political
Lobbying Spend: $0
General
Website: https://governance.ai
Products & Usage
Publication Count: 50
Organization
Founded Date: 2016
Country: United Kingdom
Other
Advisory Board: Ajeya Cotra, Allan Dafoe, Helen Toner, Tasha McCauley, Toby Ord
Program: GovAI Fellowship — competitive research fellowship program bringing early-career researchers to work on AI governance for 3–12 months. 100+ alumni placed across DeepMind, OpenAI, Anthropic, government agencies, and think tanks.
Legal Identifier: UK Companies House #15883729 (Company Limited by Guarantee)
Publication: Computing Power and the Governance of Artificial Intelligence — argues compute is the most governable AI pillar, proposes international monitoring mechanisms
Policy Influence: Vice-Chair role on the EU GPAI Code of Practice drafting process (2024–2025)
Independence Date: 2021

Other Data

Publications
23 entries
Title | Type | Authors | URL | Published | Venue
Frontier AI Auditing: Toward Rigorous Third-Party Assessment | paper | Brundage, Dreksler, Homewood, McGregor et al. | governance.ai | 2026-01
Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards | paper | Williams, Righetti, Rosenberg et al. | governance.ai | 2025-07
Third-Party Compliance Reviews for Frontier AI Safety Frameworks | paper | Homewood, Williams, Dreksler, Lidiard, Garfinkel, Schuett et al. | governance.ai | 2025-05
Infrastructure for AI Agents | paper | Chan, Wei, Huang, Rajkumar, Perrier, Lazar, Hadfield, Anderljung | governance.ai | 2025-01
IDs for AI Systems | paper | Chan, Kolt, Wills, Anwar, Schroeder de Witt, Rajkumar, Hammond, Krueger, Heim, Anderljung | governance.ai | 2024-10
Safety Cases for Frontier AI | paper | Buhl, Sett, Koessler, Schuett, Anderljung | governance.ai | 2024-10
A Grading Rubric for AI Safety Frameworks | paper | Alaga, Schuett, Anderljung | governance.ai | 2024-09
From Principles to Rules: A Regulatory Approach for Frontier AI | paper | Schuett, Anderljung, Carlier, Koessler, Garfinkel | governance.ai | 2024-08
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of LLMs | paper | Eloundou, Manning, Mishkin, Rock | governance.ai | 2024-06
Visibility into AI Agents | paper | Chan, Ezell, Kaufmann, Wei, Hammond, Bradley, Bluemke, Rajkumar, Krueger, Kolt, Heim, Anderljung | governance.ai | 2024-06
Risk Thresholds for Frontier AI | paper | Koessler, Schuett, Anderljung | governance.ai | 2024-06
Societal Adaptation to Advanced AI | paper | Bernardi, Mukobi, Greaves, Heim, Anderljung | governance.ai | 2024-05
Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation | paper | Heim, Fist, Egan, Huang, Zekany, Trager, Osborne, Zilberman | governance.ai | 2024-03
Computing Power and the Governance of Artificial Intelligence | paper | Sastry, Heim, Anderljung et al. | arxiv.org | 2024-02 | arXiv
Computing Power and the Governance of AI | paper | Lennart Heim et al. | governance.ai | 2024-02
What Should Be Internationalised in AI Governance? | paper | Robert Trager, Ben Garfinkel, et al. | governance.ai | 2024
Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Markus Anderljung, Joslyn Barnhart, Anton Korinek, et al. | governance.ai | 2023-11
Three Lines of Defense Against Risks from AI | paper | Jonas Schuett | governance.ai | 2023-10
Open-Sourcing Highly Capable Foundation Models | paper | Elizabeth Seger, Noemi Dreksler, Richard Moulange, et al. | governance.ai | 2023-09
International Governance of Civilian AI: A Jurisdictional Certification Approach | paper | Trager, Harack, Reuel, Carnegie, Heim, Ho et al. | governance.ai | 2023-08
Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Anderljung et al. | arxiv.org | 2023-07 | arXiv
Model Evaluation for Extreme Risks | paper | Shevlane, Farquhar, Garfinkel et al. | arxiv.org | 2023-05 | arXiv
Model Evaluation for Extreme Risks | paper | Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, et al. | arxiv.org | 2023-05

Divisions

2 entries

Related Wiki Pages

Top Related Pages

Approaches

Structured Access / API-Only
AI Governance Research and Analysis

Analysis

AI Safety Intervention Effectiveness Matrix
AI Risk Interaction Matrix

Policy

China AI Regulatory Framework

Risks

AI Development Racing Dynamics
AI Proliferation

Organizations

Future of Humanity Institute
Georgetown CSET
US AI Safety Institute
FAR AI
Centre for Effective Altruism
Anthropic

Concepts

Compute Thresholds

Other

Lennart Heim
Allan Dafoe
Markus Anderljung

Key Debates

AI Accident Risk Cruxes

Historical

International AI Safety Summit Series