Longterm Wiki
Updated 2025-12-28
Summary

GovAI is an AI policy research organization with ~15-20 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024). It has trained 100+ governance researchers through its fellowship programs and currently holds a Vice-Chair position in the drafting of the EU GPAI Code of Practice. Its compute governance research has influenced regulatory thresholds across the US, UK, and EU, and its alumni now occupy key positions in frontier labs, think tanks, and government.

GovAI
Type: Safety Org
Related policies: Compute Governance, International Coordination
Related organizations: Google DeepMind

Overview

The Centre for the Governance of AI (GovAI) is one of the most influential AI policy research organizations globally, combining rigorous research with direct policy engagement at the highest levels. Originally founded as part of the Future of Humanity Institute at Oxford, GovAI became an independent nonprofit in 2023 when FHI closed, and subsequently relocated to London in 2024 to enhance its policy engagement capabilities.

GovAI's theory of impact centers on producing foundational research that shapes how governments and industry approach AI governance, while simultaneously training the next generation of AI governance professionals. Their 2018 research agenda helped define the nascent field of AI governance, and their subsequent work on compute governance has become a cornerstone of regulatory thinking in the US, UK, and EU. The organization receives substantial support from Coefficient Giving, with grants totaling over $1.8 million in 2023-2024 alone.

The organization's influence extends beyond research: GovAI alumni now occupy key positions across the AI governance landscape—in frontier AI labs (Google DeepMind, OpenAI, Anthropic), major think tanks (CSET, RAND), and government positions in the US, UK, and EU. Perhaps most significantly, GovAI's Director of Policy Markus Anderljung currently serves as Vice-Chair of the EU's General-Purpose AI Code of Practice drafting process, directly shaping how the world's first comprehensive AI law will be implemented.

Organization Profile

| Attribute | Details |
| --- | --- |
| Founded | 2018 (as part of FHI); independent 2023 |
| Location | London, UK (moved from Oxford in 2024) |
| Structure | Independent nonprofit |
| Staff size | ≈15-20 researchers and staff |
| Annual budget | ≈$1-4M (estimated from grants) |
| Primary funder | Coefficient Giving ($1.8M+ in 2023-2024) |
| Affiliations | US AI Safety Institute Consortium member |

Key Metrics

| Metric | Value | Notes |
| --- | --- | --- |
| Publications in peer-reviewed venues | 50+ | Nature, Science, NeurIPS, International Organization |
| Fellowship alumni placed | 100+ | Since 2018 |
| Government advisory engagements | UK, US, EU | Direct policy input |
| Current policy roles | EU GPAI Code Vice-Chair | Markus Anderljung |

Research Areas

GovAI's research spans four interconnected domains, with particular depth in compute governance where they have produced foundational work cited by policymakers globally.


Compute Governance

GovAI's signature contribution is the compute governance framework—the idea that computing power, unlike data or algorithms, is physical, measurable, and therefore governable. Their February 2024 paper "Computing Power and the Governance of AI" (Anderljung, Heim, et al.) has become the definitive reference, cited in policy discussions from Washington to Brussels.

| Research stream | Key papers | Policy impact |
| --- | --- | --- |
| Compute thresholds | Training Compute Thresholds (2024) | Informed EU 10^25 FLOP threshold |
| Cloud governance | Governing Through the Cloud (2024) | Know-Your-Customer proposals |
| Hardware controls | Chip Tracking Mechanisms (2023) | Export control discussions |
| Verification | AI Verification (2023) | International monitoring concepts |

Lennart Heim, formerly GovAI's compute governance lead (now at RAND), regularly advises governments on implementation. His work demonstrates how compute provides a "governance surface"—a point where regulators can observe and influence AI development without requiring access to proprietary algorithms.
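Part of what makes compute a workable governance surface is that training compute can be estimated from observable model scale. As a minimal illustration (this sketch uses the widely cited C ≈ 6ND approximation, which is not specific to GovAI's papers), a regulator could check an estimated training run against the EU AI Act's 10^25 FLOP threshold:

```python
# Rough training-compute estimate using the common approximation
# C ≈ 6 * N * D, where N = model parameters and D = training tokens,
# compared against the EU AI Act's 10^25 FLOP threshold.
# Illustrative sketch only; real threshold assessments are more involved.

EU_THRESHOLD_FLOP = 1e25


def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP via C = 6*N*D."""
    return 6 * params * tokens


def exceeds_eu_threshold(params: float, tokens: float) -> bool:
    """Does an estimated training run cross the 10^25 FLOP line?"""
    return training_flop(params, tokens) >= EU_THRESHOLD_FLOP


# Example: a hypothetical 70B-parameter model trained on 15T tokens
# comes out to 6 * 70e9 * 15e12 = 6.3e24 FLOP, below the threshold.
print(exceeds_eu_threshold(70e9, 15e12))
```

The point of the approximation is that both inputs are coarse, externally estimable quantities, so oversight does not require access to proprietary algorithms or weights.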

International Coordination

GovAI researches how nations can coordinate on AI governance despite competitive pressures. Their work on "AI Race Dynamics" examines why rational actors might collectively produce suboptimal outcomes, and what mechanisms might enable cooperation.

| Research topic | Key finding | Policy relevance |
| --- | --- | --- |
| Race dynamics | Competitive pressure degrades safety investments | Supports international coordination |
| Standards harmonization | Technical standards can enable verification | Informs AI safety summits |
| Information sharing | Incident reporting reduces collective risk | Model for international registries |
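The race-dynamics finding is a standard collective-action result: when racing is each actor's best response regardless of what others do, everyone races, even though mutual safety investment would leave everyone better off. A toy two-lab payoff matrix (the numbers here are hypothetical, not from GovAI's work) makes the structure concrete:

```python
# Toy two-lab race: each lab invests in safety ("C") or races ahead ("D").
# Payoff numbers are illustrative; only their ordering matters.
payoffs = {
    ("C", "C"): (3, 3),  # both invest: safe development, shared benefit
    ("C", "D"): (0, 4),  # racer captures the lead; investor loses out
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),  # both race: lead neutralized, safety eroded
}


def best_response(opponent: str) -> str:
    """Lab 1's payoff-maximizing move given the other lab's fixed choice."""
    return max(("C", "D"), key=lambda action: payoffs[(action, opponent)][0])


# Racing strictly dominates: it is the best response to either choice...
print(best_response("C"), best_response("D"))  # both "D"
# ...yet mutual safety investment Pareto-dominates mutual racing,
# which is why coordination mechanisms (standards, verification,
# incident reporting) can improve outcomes for every actor.
```

This is the sense in which "rational actors might collectively produce suboptimal outcomes": the equilibrium (both race) is worse for both labs than the cooperative outcome neither can sustain unilaterally.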

Frontier AI Regulation

Recent GovAI work focuses specifically on governing frontier AI—systems at or near the capability frontier that pose novel safety and security risks.

| Publication | Year | Contribution |
| --- | --- | --- |
| Frontier AI Regulation: Managing Emerging Risks | 2023 | Proposed tiered regulatory framework |
| Safety Cases for Frontier AI | 2024 | Framework for demonstrating system safety |
| Coordinated Pausing Scheme | 2024 | Evaluation-based pause mechanism for dangerous capabilities |

GovAI collaborated with UK AISI on safety case sketches for offensive cyber capabilities, demonstrating practical application of their theoretical frameworks.

Field Building

GovAI runs competitive fellowship programs that have trained 100+ AI governance researchers since 2018. The fellowship provides mentorship from leading experts and has become a primary talent pipeline for the field.


Key People

GovAI's leadership combines academic rigor with policy experience. Several former team members have moved to positions of significant influence.

Current Leadership

- Ben Garfinkel, Director
- Markus Anderljung, Director of Policy & Research
- Emma Bluemke, Research Manager
- Anton Korinek, Economics of AI Lead

Leadership Profiles

| Person | Role | Background | Notable contributions |
| --- | --- | --- | --- |
| Ben Garfinkel | Director | DPhil Oxford (IR); former OpenAI consultant | Sets organizational direction; security implications research |
| Markus Anderljung | Director of Policy | EY Sweden; UK Cabinet Office secondee | EU GPAI Code Vice-Chair; compute governance |
| Allan Dafoe | President (now at DeepMind) | Yale PhD; founded GovAI in 2018 | Foundational research agenda; field definition |
| Lennart Heim | Adjunct Fellow (at RAND) | Technical AI policy | Compute governance lead; OECD expert group |

Alumni Placements

GovAI's impact extends through its alumni network, which now spans the AI governance ecosystem:

| Sector | Organizations | Significance |
| --- | --- | --- |
| Frontier labs | DeepMind, OpenAI, Anthropic | Policy and governance roles |
| Government | UK Cabinet Office, US OSTP, EU AI Office | Direct policy influence |
| Think tanks | CSET, RAND, CNAS | Research leadership |
| Academia | Oxford, Cambridge | Academic positions |

Key Publications

GovAI has published extensively in peer-reviewed venues and policy outlets. Their work is notable for bridging academic rigor with practical policy relevance.

Major Publications (2023-2025)

| Title | Year | Authors | Venue | Impact |
| --- | --- | --- | --- | --- |
| Computing Power and the Governance of AI | 2024 | Anderljung, Heim, et al. | GovAI | Foundational compute governance reference |
| Safety Cases for Frontier AI | 2024 | GovAI/AISI | GovAI | Framework for demonstrating AI safety |
| Coordinated Pausing: An Evaluation-Based Scheme | 2024 | GovAI | GovAI | Proposes pause mechanism for dangerous capabilities |
| Training Compute Thresholds | 2024 | Heim, Koessler | White paper | Informs regulatory threshold-setting |
| Governing Through the Cloud | 2024 | Fist, Heim, et al. | Oxford | Cloud provider regulatory role |
| Frontier AI Regulation | 2023 | GovAI | GovAI | Tiered regulatory framework proposal |
| Standards for AI Governance | 2023 | GovAI | GovAI | International standards analysis |

Publication Venues

GovAI researchers have published in leading journals and conferences:

| Venue type | Examples |
| --- | --- |
| Academic journals | Nature, Nature Machine Intelligence, Science, International Organization |
| CS conferences | NeurIPS, AAAI AIES, ICML |
| Policy outlets | Journal of Strategic Studies |

Policy Influence

GovAI's influence operates through multiple channels: direct government advisory, regulatory participation, talent placement, and intellectual framework-setting.

Direct Policy Engagement (2024-2025)

| Engagement | Role | Significance |
| --- | --- | --- |
| EU GPAI Code of Practice | Vice-Chair (Anderljung) | Drafting Safety & Security chapter for AI Act implementation |
| UK Cabinet Office | Secondment (Anderljung, past) | Senior AI Policy Specialist |
| US AI Safety Institute Consortium | Member organization | Contributing to US AI safety standards |
| OECD AI Expert Group | Member (Heim) | AI Compute and Climate |

Framework Influence

GovAI's conceptual frameworks have shaped regulatory thinking:

| Framework | Adoption |
| --- | --- |
| Compute governance | Referenced in EU AI Act (10^25 FLOP threshold); US Executive Order |
| Tiered frontier regulation | Informs UK, EU, US approaches to frontier AI |
| Safety cases | Adopted by UK AISI as assessment framework |

Comparison with Peer Organizations

| Organization | Focus | Size | Budget | Policy access |
| --- | --- | --- | --- | --- |
| GovAI | AI governance research + field building | ≈20 | ≈$1-4M | High (EU, UK, US) |
| CSET (Georgetown) | Security + emerging tech | ≈50 | ≈$10M+ | High (US focus) |
| RAND AI | Broad AI policy | ≈30 | ≈$1M+ | High (US focus) |
| Oxford AI Governance | Academic research | ≈10 | ≈$1M | Medium |

GovAI is distinctive for combining research depth with direct regulatory participation—particularly through Anderljung's Vice-Chair role in EU AI Act implementation.


Funding

GovAI is primarily funded by Coefficient Giving, which has provided substantial support for AI governance work.

| Grant | Year | Amount | Purpose |
| --- | --- | --- | --- |
| General Support | 2024 | $1,800,000 | Core operations |
| General Support | 2023 | $1,000,000 | Core operations |
| Field Building | 2021 | $141,613 | Fellowship programs |

Strategic Assessment

Strengths

GovAI occupies a distinctive niche: producing rigorous, policy-relevant research while maintaining direct access to regulatory processes. Key strengths include:

  1. Compute governance expertise: Arguably the leading research group on this topic globally
  2. Talent pipeline: Fellowship program has trained significant portion of AI governance workforce
  3. Policy access: Direct participation in EU AI Act implementation; alumni in key government roles
  4. Academic credibility: Publications in top venues; Oxford affiliation (historical)

Limitations

  1. Funding concentration: Heavy reliance on Coefficient Giving creates potential vulnerability
  2. Geographic focus: Primarily UK/US/EU; limited Global South engagement
  3. Implementation gap: Research excellence doesn't always translate to implementation capacity
  4. Scale constraints: Small team relative to policy influence ambitions

Key Uncertainties

| Question | Significance |
| --- | --- |
| Will compute governance prove tractable? | GovAI's signature bet |
| EU AI Act implementation success | Test of direct policy influence |
| Talent pipeline sustainability | Central to long-term impact |
| Funding diversification | Reduces single-funder risk |

Related Pages

People
Yoshua Bengio · Dan Hendrycks

Approaches
AI Safety Cases · AI Governance Coordination Technologies

Analysis
Short AI Timeline Policy Implications · OpenAI Foundation Governance Paradox

Policy
Compute Monitoring

Transition Model
International Coordination · Geopolitics · Coordination Capacity · Lab Behavior

Concepts
Governance-Focused Worldview · Anthropic

Organizations
US AI Safety Institute · UK AI Safety Institute

Risks
AI Proliferation

Models
AI Regulatory Capacity Threshold Model · International AI Coordination Game Model

Key Debates
Government Regulation vs Industry Self-Governance

Historical
AI Safety Summit (Bletchley Park)