GovAI
GovAI is an AI policy research organization with roughly 40-45 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024). It has trained 100+ governance researchers through its fellowships, and its Director of Policy currently holds the Vice-Chair position in the EU GPAI Code of Practice drafting process. GovAI's compute governance research has influenced regulatory thresholds across the US, UK, and EU, and its alumni now occupy key positions in frontier labs, think tanks, and government.

Type: Safety Org

Related organizations: Google DeepMind
Related people: Allan Dafoe, Ben Garfinkel, Markus Anderljung, Robert Trager, Jonas Schuett, Lennart Heim, Jade Leung

Overview

The Centre for the Governance of AI (GovAI) is one of the most influential AI policy research organizations globally, combining rigorous research with direct policy engagement at the highest levels. Originally founded as part of the Future of Humanity Institute at Oxford, GovAI became an independent nonprofit in 2023 when FHI closed, and subsequently relocated to London in 2024 to enhance its policy engagement capabilities.

GovAI's theory of impact centers on producing foundational research that shapes how governments and industry approach AI governance, while simultaneously training the next generation of AI governance professionals. Their 2018 research agenda helped define the nascent field of AI governance, and their subsequent work on compute governance has become a cornerstone of regulatory thinking in the US, UK, and EU. The organization receives substantial support from Coefficient Giving, with grants totaling over $1.8 million in 2023-2024 alone.

The organization's influence extends beyond research: GovAI alumni now occupy key positions across the AI governance landscape—in frontier AI labs (Google DeepMind, OpenAI, Anthropic), major think tanks (CSET, RAND), and government positions in the US, UK, and EU. Perhaps most significantly, GovAI's Director of Policy Markus Anderljung currently serves as Vice-Chair of the EU's General-Purpose AI Code of Practice drafting process, directly shaping how the world's first comprehensive AI law will be implemented.

Organization Profile

| Attribute | Details |
|---|---|
| Founded | 2018 (as part of FHI); independent since 2023 |
| Location | London, UK (moved from Oxford in 2024) |
| Structure | Independent nonprofit |
| Staff size | ≈40-45 researchers and staff |
| Annual budget | ≈$1-4M (estimated from grants) |
| Primary funder | Coefficient Giving ($1.8M+ in 2023-2024) |
| Affiliations | US AI Safety Institute Consortium member |

Key Metrics

| Metric | Value | Notes |
|---|---|---|
| Publications in peer-reviewed venues | 50+ | Nature, Science, NeurIPS, International Organization |
| Fellowship alumni placed | 100+ | Since 2018 |
| Government advisory engagements | UK, US, EU | Direct policy input |
| Current policy roles | EU GPAI Code Vice-Chair | Markus Anderljung |

Research Areas

GovAI's research spans four interconnected domains, with particular depth in compute governance where they have produced foundational work cited by policymakers globally.

```mermaid
flowchart TD
  subgraph RESEARCH["Research Domains"]
      CG[Compute Governance]
      IC[International Coordination]
      FR[Frontier AI Regulation]
      FB[Field Building]
  end

  subgraph OUTPUTS["Policy Outputs"]
      FRAME[Governance Frameworks]
      STAND[Standards Development]
      TALENT[Trained Researchers]
  end

  subgraph IMPACT["Real-World Impact"]
      UKPOL[UK Policy]
      EUPOL[EU AI Act]
      USPOL[US Executive Order]
  end

  CG --> FRAME
  IC --> STAND
  FR --> FRAME
  FB --> TALENT

  FRAME --> UKPOL
  FRAME --> EUPOL
  STAND --> USPOL
  TALENT --> UKPOL
  TALENT --> EUPOL
  TALENT --> USPOL

  style RESEARCH fill:#e6f3ff
  style OUTPUTS fill:#fff3e6
  style IMPACT fill:#e6ffe6
```

Compute Governance

GovAI's signature contribution is the compute governance framework—the idea that computing power, unlike data or algorithms, is physical, measurable, and therefore governable. Their February 2024 paper "Computing Power and the Governance of AI" (Anderljung, Heim, et al.) has become the definitive reference, cited in policy discussions from Washington to Brussels.

| Research Stream | Key Papers | Policy Impact |
|---|---|---|
| Compute thresholds | Training Compute Thresholds (2024) | Informed EU 10^25 FLOP threshold |
| Cloud governance | Governing Through the Cloud (2024) | Know-Your-Customer proposals |
| Hardware controls | Chip Tracking Mechanisms (2023) | Export control discussions |
| Verification | AI Verification (2023) | International monitoring concepts |
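To make the threshold idea concrete: regulators can estimate a training run's total compute from two observable quantities, parameter count and training tokens, using the standard 6·N·D approximation for dense transformer training. The sketch below is illustrative only; the threshold value is the EU AI Act's 10^25 FLOP figure cited above, and the example model sizes are hypothetical.

```python
# Minimal sketch of a compute-threshold check, assuming the common
# 6 * N * D approximation (training FLOPs ~ 6 x parameters x tokens)
# for dense transformer training. Illustrative, not a regulatory tool.

EU_GPAI_THRESHOLD_FLOP = 1e25  # EU AI Act general-purpose AI threshold


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens


def crosses_threshold(n_params: float, n_tokens: float,
                      threshold: float = EU_GPAI_THRESHOLD_FLOP) -> bool:
    """True if the estimated training run meets or exceeds the threshold."""
    return training_flops(n_params, n_tokens) >= threshold


# Hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)          # ~6.3e24 FLOP
print(f"{flops:.2e}", crosses_threshold(70e9, 15e12))  # below threshold
```

The point of the "governance surface" argument is visible here: both inputs are coarse, externally estimable quantities, so no access to proprietary algorithms or weights is required.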

Lennart Heim, formerly GovAI's compute governance lead (now at RAND), regularly advises governments on implementation. His work demonstrates how compute provides a "governance surface"—a point where regulators can observe and influence AI development without requiring access to proprietary algorithms.

International Coordination

GovAI researches how nations can coordinate on AI governance despite competitive pressures. Their work on "AI Race Dynamics" examines why rational actors might collectively produce suboptimal outcomes, and what mechanisms might enable cooperation.

| Research Topic | Key Finding | Policy Relevance |
|---|---|---|
| Race dynamics | Competitive pressure degrades safety investments | Supports international coordination |
| Standards harmonization | Technical standards can enable verification | Informs AI safety summits |
| Information sharing | Incident reporting reduces collective risk | Model for international registries |
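The race-dynamics finding has a prisoner's-dilemma structure, which a toy two-player payoff model makes explicit. All numbers below are hypothetical, chosen only to show why individually rational choices can produce a collectively worse outcome.

```python
# Toy model of AI race dynamics: two labs each choose to invest in
# safety ("safe") or cut corners ("fast"). Payoffs are hypothetical
# and illustrate the collective-action problem: "fast" is each lab's
# best response, but (fast, fast) is worse for both than (safe, safe).

PAYOFFS = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("safe", "safe"): (3, 3),
    ("safe", "fast"): (0, 4),
    ("fast", "safe"): (4, 0),
    ("fast", "fast"): (1, 1),
}


def best_response(opponent_move: str) -> str:
    """Lab A's payoff-maximising move given lab B's move."""
    return max(("safe", "fast"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])


# "fast" dominates regardless of the opponent's choice...
print(best_response("safe"), best_response("fast"))   # fast fast
# ...yet mutual defection (1, 1) is worse than mutual safety (3, 3),
# which is the case for coordination mechanisms such as standards
# harmonization and incident reporting.
```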

Frontier AI Regulation

Recent GovAI work focuses specifically on governing frontier AI—systems at or near the capability frontier that pose novel safety and security risks.

| Publication | Year | Contribution |
|---|---|---|
| Frontier AI Regulation: Managing Emerging Risks | 2023 | Proposed tiered regulatory framework |
| Safety Cases for Frontier AI | 2024 | Framework for demonstrating system safety |
| Coordinated Pausing Scheme | 2024 | Evaluation-based pause mechanism for dangerous capabilities |

GovAI collaborated with UK AISI on safety case sketches for offensive cyber capabilities, demonstrating practical application of their theoretical frameworks.

Field Building

GovAI runs competitive fellowship programs that have trained 100+ AI governance researchers since 2018. The fellowship provides mentorship from leading experts and has become a primary talent pipeline for the field.


Key People

GovAI's leadership combines academic rigor with policy experience. Several former team members have moved to positions of significant influence.

Current Leadership

- Ben Garfinkel — Director
- Markus Anderljung — Director of Policy & Research
- Emma Bluemke — Research Manager
- Anton Korinek — Economics of AI Lead

Leadership Profiles

| Person | Role | Background | Notable Contributions |
|---|---|---|---|
| Ben Garfinkel | Director | DPhil Oxford (IR); former OpenAI consultant | Sets organizational direction; security implications research |
| Markus Anderljung | Director of Policy | EY Sweden; UK Cabinet Office secondee | EU GPAI Code Vice-Chair; compute governance |
| Allan Dafoe | President (now at DeepMind) | Yale PhD; founded GovAI in 2018 | Foundational research agenda; field definition |
| Lennart Heim | Adjunct Fellow (at RAND) | Technical AI policy | Compute governance lead; OECD expert group |

Alumni Placements

GovAI's impact extends through its alumni network, which now spans the AI governance ecosystem:

| Sector | Organizations | Significance |
|---|---|---|
| Frontier labs | DeepMind, OpenAI, Anthropic | Policy and governance roles |
| Government | UK Cabinet Office, US OSTP, EU AI Office | Direct policy influence |
| Think tanks | CSET, RAND, CNAS | Research leadership |
| Academia | Oxford, Cambridge | Academic positions |

Key Publications

GovAI has published extensively in peer-reviewed venues and policy outlets. Their work is notable for bridging academic rigor with practical policy relevance.

Major Publications (2023-2025)

| Title | Year | Authors | Venue | Impact |
|---|---|---|---|---|
| Computing Power and the Governance of AI | 2024 | Anderljung, Heim, et al. | GovAI | Foundational compute governance reference |
| Safety Cases for Frontier AI | 2024 | GovAI/AISI | GovAI | Framework for demonstrating AI safety |
| Coordinated Pausing: An Evaluation-Based Scheme | 2024 | GovAI | GovAI | Proposes pause mechanism for dangerous capabilities |
| Training Compute Thresholds | 2024 | Heim, Koessler | White paper | Informs regulatory threshold-setting |
| Governing Through the Cloud | 2024 | Fist, Heim, et al. | Oxford | Cloud provider regulatory role |
| Frontier AI Regulation | 2023 | GovAI | GovAI | Tiered regulatory framework proposal |
| Standards for AI Governance | 2023 | GovAI | GovAI | International standards analysis |

Publication Venues

GovAI researchers have published in leading journals and conferences:

| Venue Type | Examples |
|---|---|
| Academic journals | Nature, Nature Machine Intelligence, Science, International Organization |
| CS conferences | NeurIPS, AAAI AIES, ICML |
| Policy outlets | Journal of Strategic Studies |

Policy Influence

GovAI's influence operates through multiple channels: direct government advisory, regulatory participation, talent placement, and intellectual framework-setting.

Direct Policy Engagement (2024-2025)

| Engagement | Role | Significance |
|---|---|---|
| EU GPAI Code of Practice | Vice-Chair (Anderljung) | Drafting Safety & Security chapter for AI Act implementation |
| UK Cabinet Office | Secondment (Anderljung, past) | Senior AI Policy Specialist |
| US AI Safety Institute Consortium | Member organization | Contributing to US AI safety standards |
| OECD AI Expert Group | Member (Heim) | AI Compute and Climate |

Framework Influence

GovAI's conceptual frameworks have shaped regulatory thinking:

| Framework | Adoption |
|---|---|
| Compute governance | Referenced in EU AI Act (10^25 FLOP threshold); US Executive Order |
| Tiered frontier regulation | Informs UK, EU, US approaches to frontier AI |
| Safety cases | Adopted by UK AISI as assessment framework |

Comparison with Peer Organizations

| Organization | Focus | Size | Budget | Policy Access |
|---|---|---|---|---|
| GovAI | AI governance research + field building | ≈40-45 | ≈$1-4M | High (EU, UK, US) |
| CSET (Georgetown) | Security + emerging tech | ≈50 | ≈$10M+ | High (US focus) |
| RAND AI | Broad AI policy | ≈30 | ≈$1M+ | High (US focus) |
| Oxford AI Governance | Academic research | ≈10 | ≈$1M | Medium |

GovAI is distinctive for combining research depth with direct regulatory participation—particularly through Anderljung's Vice-Chair role in EU AI Act implementation.


Funding

GovAI is primarily funded by Coefficient Giving, which has provided substantial support for AI governance work.

| Grant | Year | Amount | Purpose |
|---|---|---|---|
| General Support | 2024 | $1,800,000 | Core operations |
| General Support | 2023 | $1,000,000 | Core operations |
| Field Building | 2021 | $141,613 | Fellowship programs |

Strategic Assessment

Strengths

GovAI occupies a distinctive niche: producing rigorous, policy-relevant research while maintaining direct access to regulatory processes. Key strengths include:

  1. Compute governance expertise: Arguably the leading research group on this topic globally
  2. Talent pipeline: The fellowship program has trained a significant portion of the AI governance workforce
  3. Policy access: Direct participation in EU AI Act implementation; alumni in key government roles
  4. Academic credibility: Publications in top venues; Oxford affiliation (historical)

Limitations

  1. Funding concentration: Heavy reliance on Coefficient Giving creates potential vulnerability
  2. Geographic focus: Primarily UK/US/EU; limited Global South engagement
  3. Implementation gap: Research excellence doesn't always translate to implementation capacity
  4. Scale constraints: Small team relative to policy influence ambitions

Key Uncertainties

| Question | Significance |
|---|---|
| Will compute governance prove tractable? | GovAI's signature bet |
| EU AI Act implementation success | Test of direct policy influence |
| Talent pipeline sustainability | Central to long-term impact |
| Funding diversification | Reduces single-funder risk |


Structured Data

Revenue: $5.7 million (as of 2024)
Headcount: 40–45 (as of 2025)
Total funding raised: $13.3 million (as of 2025)
Founded date: 2016

Key People

- Anton Korinek — Economics of AI Lead
- Ben Garfinkel — Director (2021–present)

All Facts

Organization

| Property | Value | As Of |
|---|---|---|
| Founded Date | 2016 | |
| Country | United Kingdom | |

Financial

| Property | Value | As Of |
|---|---|---|
| Grant Received | $3.0 million | Feb 2025 |
| Grant Received (earlier) | $756,000 | 2025 |
| Grant Received (earlier) | $2.5 million | 2024 |
| Grant Received (earlier) | $2.8 million | 2023 |
| Headcount | 40–45 | 2025 |
| Total Funding Raised | $13.3 million | 2025 |
| Revenue | $5.7 million | 2024 |
| Annual Expenses | $919,661 | 2024 |

Product

| Property | Value | As Of |
|---|---|---|
| Publication Count | 50 | 2025 |

Political

| Property | Value | As Of |
|---|---|---|
| Lobbying Spend | $0 | May 2024 |

General

| Property | Value | As Of |
|---|---|---|
| Website | https://governance.ai | |

Other

| Property | Value | As Of |
|---|---|---|
| Advisory Board | Ajeya Cotra, Allan Dafoe, Helen Toner, Tasha McCauley, Toby Ord | 2025 |
| Program | GovAI Fellowship — competitive research fellowship bringing early-career researchers to work on AI governance for 3-12 months; 100+ alumni placed across DeepMind, OpenAI, Anthropic, government agencies, and think tanks | 2025 |
| Legal Identifier | UK Companies House #15883729 (Company Limited by Guarantee) | Aug 2024 |
| Publication | Computing Power and the Governance of Artificial Intelligence — argues compute is the most governable AI pillar, proposes international monitoring mechanisms | Feb 2024 |
| Publication (earlier) | Safety Cases for Frontier AI — argues for structured safety arguments analogous to safety cases in other high-risk industries | 2024 |
| Publication (earlier) | Risk Thresholds for Frontier AI — proposes framework for when frontier AI capabilities warrant regulatory intervention | 2024 |
| Publication (earlier) | Frontier AI Regulation: Managing Emerging Risks to Public Safety — proposes three regulatory building blocks: standards, registration/reporting, compliance mechanisms | Jul 2023 |
| Policy Influence | Vice-Chair role on the EU GPAI Code of Practice drafting process (2024-2025) | 2024 |
| Key Person | Markus Anderljung | 2024 |
| Key Person (earlier) | Ben Garfinkel | 2024 |
| Independence Date | 2021 | |

Divisions

| Name | Type | Status |
|---|---|---|
| GovAI Policy | team | active |
| GovAI Research | team | active |

Publications

All listed publications are hosted at governance.ai.

| Title | Type | Authors | Published |
|---|---|---|---|
| Frontier AI Auditing: Toward Rigorous Third-Party Assessment | paper | Brundage, Dreksler, Homewood, McGregor et al. | 2026-01 |
| Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards | paper | Williams, Righetti, Rosenberg et al. | 2025-07 |
| Third-Party Compliance Reviews for Frontier AI Safety Frameworks | paper | Homewood, Williams, Dreksler, Lidiard, Garfinkel, Schuett et al. | 2025-05 |
| Infrastructure for AI Agents | paper | Chan, Wei, Huang, Rajkumar, Perrier, Lazar, Hadfield, Anderljung | 2025-01 |
| IDs for AI Systems | paper | Chan, Kolt, Wills, Anwar, Schroeder de Witt, Rajkumar, Hammond, Krueger, Heim, Anderljung | 2024-10 |
| Safety Cases for Frontier AI | paper | Buhl, Sett, Koessler, Schuett, Anderljung | 2024-10 |
| A Grading Rubric for AI Safety Frameworks | paper | Alaga, Schuett, Anderljung | 2024-09 |
| From Principles to Rules: A Regulatory Approach for Frontier AI | paper | Schuett, Anderljung, Carlier, Koessler, Garfinkel | 2024-08 |
| GPTs are GPTs: An Early Look at the Labor Market Impact Potential of LLMs | paper | Eloundou, Manning, Mishkin, Rock | 2024-06 |
| Visibility into AI Agents | paper | Chan, Ezell, Kaufmann, Wei, Hammond, Bradley, Bluemke, Rajkumar, Krueger, Kolt, Heim, Anderljung | 2024-06 |
| Risk Thresholds for Frontier AI | paper | Koessler, Schuett, Anderljung | 2024-06 |
| Societal Adaptation to Advanced AI | paper | Bernardi, Mukobi, Greaves, Heim, Anderljung | 2024-05 |

Related Wiki Pages

Approaches: Structured Access / API-Only · AI Governance Research and Analysis
Analysis: AI Safety Intervention Effectiveness Matrix · AI Risk Interaction Matrix
Policy: China AI Regulatory Framework
Risks: AI Development Racing Dynamics · AI Proliferation
Organizations: Future of Humanity Institute · Georgetown CSET · US AI Safety Institute · FAR AI · Centre for Effective Altruism · Anthropic
Concepts: Compute Thresholds
Other: Lennart Heim · Allan Dafoe · Markus Anderljung
Key Debates: AI Accident Risk Cruxes
Historical: International AI Safety Summit Series