GovAI
GovAI is an AI policy research organization with ~15-20 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024). It has trained 100+ governance researchers through its fellowships and currently holds a Vice-Chair position in the drafting of the EU General-Purpose AI (GPAI) Code of Practice. Its compute governance research has influenced regulatory thresholds across the US, UK, and EU, and its alumni now occupy key positions in frontier labs, think tanks, and government.
Overview
The Centre for the Governance of AI (GovAI) is one of the most influential AI policy research organizations globally, combining rigorous research with direct policy engagement at the highest levels. Originally founded as part of the Future of Humanity Institute (FHI) at Oxford, GovAI became an independent nonprofit in 2023 when FHI closed, and relocated to London in 2024 to strengthen its policy engagement.
GovAI's theory of impact centers on producing foundational research that shapes how governments and industry approach AI governance, while simultaneously training the next generation of AI governance professionals. Their 2018 research agenda helped define the nascent field of AI governance, and their subsequent work on compute governance has become a cornerstone of regulatory thinking in the US, UK, and EU. The organization receives substantial support from Coefficient Giving, with grants totaling over $1.8 million in 2023-2024 alone.
The organization's influence extends beyond research. GovAI alumni now occupy key positions across the AI governance landscape, in frontier AI labs (Google DeepMind, OpenAI, Anthropic), at major think tanks (CSET, RAND), and in government roles in the US, UK, and EU. Perhaps most significantly, GovAI's Director of Policy, Markus Anderljung, currently serves as Vice-Chair of the EU's General-Purpose AI Code of Practice drafting process, directly shaping how the world's first comprehensive AI law will be implemented.
Organization Profile
| Attribute | Details |
|---|---|
| Founded | 2018 (as part of FHI); Independent 2023 |
| Location | London, UK (moved from Oxford in 2024) |
| Structure | Independent nonprofit |
| Staff Size | ≈15-20 researchers and staff |
| Annual Budget | ≈$1-4M (estimated from grants) |
| Primary Funder | Coefficient Giving ($1.8M+ in 2023-2024) |
| Affiliations | US AI Safety Institute Consortium member |
Key Metrics
| Metric | Value | Notes |
|---|---|---|
| Publications in peer-reviewed venues | 50+ | Nature, Science, NeurIPS, International Organization |
| Fellowship alumni placed | 100+ | Since 2018 |
| Government advisory engagements | UK, US, EU | Direct policy input |
| Current policy roles | EU GPAI Code Vice-Chair | Markus Anderljung |
Research Areas
GovAI's research spans four interconnected domains, with particular depth in compute governance where they have produced foundational work cited by policymakers globally.
Compute Governance
GovAI's signature contribution is the compute governance framework—the idea that computing power, unlike data or algorithms, is physical, measurable, and therefore governable. Their February 2024 paper "Computing Power and the Governance of AI" (Anderljung, Heim, et al.) has become the definitive reference, cited in policy discussions from Washington to Brussels.
| Research Stream | Key Papers | Policy Impact |
|---|---|---|
| Compute thresholds | Training Compute Thresholds (2024) | Informed EU 10^25 FLOP threshold |
| Cloud governance | Governing Through the Cloud (2024) | Know-Your-Customer proposals |
| Hardware controls | Chip Tracking Mechanisms (2023) | Export control discussions |
| Verification | AI Verification (2023) | International monitoring concepts |
Lennart Heim, formerly GovAI's compute governance lead (now at RAND), regularly advises governments on implementation. His work demonstrates how compute provides a "governance surface"—a point where regulators can observe and influence AI development without requiring access to proprietary algorithms.
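The arithmetic behind compute thresholds is simple enough to sketch. The fragment below uses the common ≈6 × parameters × training tokens heuristic for estimating training FLOP (a standard rule of thumb in the scaling literature, not a method from any specific GovAI paper) and checks a hypothetical training run against the EU AI Act's 10^25 FLOP threshold; all figures are illustrative.

```python
# Illustrative sketch: checking a hypothetical training run against a
# compute threshold. Uses the common ~6 * N_params * N_tokens heuristic
# for dense-transformer training FLOP; the run itself is invented.

EU_AI_ACT_THRESHOLD_FLOP = 1e25  # systemic-risk presumption threshold


def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float,
                      threshold: float = EU_AI_ACT_THRESHOLD_FLOP) -> bool:
    return estimate_training_flop(n_params, n_tokens) >= threshold


# Hypothetical run: a 70B-parameter model trained on 15T tokens.
flop = estimate_training_flop(70e9, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")         # ~6.30e+24
print("Above EU threshold:", exceeds_threshold(70e9, 15e12))  # False
```

Because parameter and token counts translate into observable hardware usage, estimates like this can be audited by cloud providers or regulators, which is what makes compute a workable governance surface.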
International Coordination
GovAI researches how nations can coordinate on AI governance despite competitive pressures. Their work on AI race dynamics examines why rational actors might collectively produce suboptimal outcomes, and what mechanisms might enable cooperation; a stylized payoff sketch follows the table below.
| Research Topic | Key Finding | Policy Relevance |
|---|---|---|
| Race dynamics | Competitive pressure degrades safety investments | Supports international coordination |
| Standards harmonization | Technical standards can enable verification | Informs AI safety summits |
| Information sharing | Incident reporting reduces collective risk | Model for international registries |
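To make the race-dynamics logic concrete, the sketch below encodes a stylized two-lab game in which each lab chooses whether to invest in safety or cut corners. The payoff numbers are invented for illustration and follow the standard prisoner's-dilemma structure often used in this literature, not a model from any particular GovAI paper.

```python
# Stylized two-lab safety-investment game (prisoner's dilemma structure).
# Payoffs are illustrative. Cutting corners is individually dominant, but
# mutual investment Pareto-dominates mutual cutting -- so rational actors
# collectively land on the worse outcome.

from itertools import product

INVEST, CUT = "invest", "cut"

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    (INVEST, INVEST): (3, 3),  # both safe, shared lead
    (INVEST, CUT):    (0, 5),  # safe lab falls behind
    (CUT,    INVEST): (5, 0),
    (CUT,    CUT):    (1, 1),  # race to the bottom
}


def best_response(opponent_choice: str, player: int) -> str:
    """Return the choice maximizing this player's payoff given the opponent's."""
    def payoff(my_choice: str) -> int:
        key = ((my_choice, opponent_choice) if player == 0
               else (opponent_choice, my_choice))
        return payoffs[key][player]
    return max((INVEST, CUT), key=payoff)


# A profile is a Nash equilibrium if each choice is a best response to the other.
for a, b in product((INVEST, CUT), repeat=2):
    if best_response(b, 0) == a and best_response(a, 1) == b:
        print(f"Nash equilibrium: ({a}, {b}) with payoffs {payoffs[(a, b)]}")
# Prints only (cut, cut) -> (1, 1), even though (invest, invest) pays (3, 3).
```

Coordination mechanisms (verification, standards, incident reporting) aim to change these payoffs so that mutual safety investment becomes the stable outcome.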
Frontier AI Regulation
Recent GovAI work focuses specifically on governing frontier AI—systems at or near the capability frontier that pose novel safety and security risks.
| Publication | Year | Contribution |
|---|---|---|
| Frontier AI Regulation: Managing Emerging Risks | 2023 | Proposed tiered regulatory framework |
| Safety Cases for Frontier AI | 2024 | Framework for demonstrating system safety |
| Coordinated Pausing Scheme | 2024 | Evaluation-based pause mechanism for dangerous capabilities |
GovAI collaborated with UK AISI on safety case sketches for offensive cyber capabilities, demonstrating practical application of their theoretical frameworks.
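The basic shape of a safety case can be sketched as a claims-and-evidence tree. The fragment below is illustrative only, loosely following generic safety-case practice (in the spirit of notations like GSN); it is not the GovAI/AISI template, and the cyber-related claims are invented placeholders.

```python
# Illustrative claims-arguments-evidence tree for a frontier-AI safety case.
# The structure (a top-level claim decomposed into subclaims, each grounded
# in evidence) follows generic safety-case practice; the content is invented.

from dataclasses import dataclass, field


@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)     # supporting artifacts
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if all subclaims hold, or it has direct evidence."""
        if self.subclaims:
            return all(c.supported() for c in self.subclaims)
        return bool(self.evidence)


case = Claim(
    statement="Model deployment poses acceptable offensive-cyber risk",
    subclaims=[
        Claim("Model lacks dangerous autonomous-exploitation capability",
              evidence=["capability evaluation report"]),
        Claim("Residual misuse risk is mitigated",
              subclaims=[
                  Claim("API misuse monitoring is in place",
                        evidence=["monitoring audit"]),
                  Claim("Fine-tuning access is restricted",
                        evidence=[]),  # gap: this claim is unsupported
              ]),
    ],
)

print("Safety case supported:", case.supported())  # False: one claim lacks evidence
```

One virtue of the formalism is that unsupported claims surface as explicit gaps for an evaluator to interrogate, rather than staying buried in prose.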
Field Building
GovAI runs competitive fellowship programs that have trained 100+ AI governance researchers since 2018. The fellowship provides mentorship from leading experts and has become a primary talent pipeline for the field.
Key People
GovAI's leadership combines academic rigor with policy experience. Several former team members have moved to positions of significant influence.
Leadership Profiles
| Person | Role | Background | Notable Contributions |
|---|---|---|---|
| Ben Garfinkel | Director | DPhil Oxford (IR); Former OpenAI consultant | Sets organizational direction; security implications research |
| Markus Anderljung | Director of Policy | EY Sweden; UK Cabinet Office secondee | EU GPAI Code Vice-Chair; compute governance |
| Allan Dafoe | President (now at DeepMind) | Yale PhD; Founded GovAI 2018 | Foundational research agenda; field definition |
| Lennart Heim | Adjunct Fellow (at RAND) | Technical AI policy | Compute governance lead; OECD expert group |
Alumni Placements
GovAI's impact extends through its alumni network, which now spans the AI governance ecosystem:
| Sector | Organizations | Significance |
|---|---|---|
| Frontier Labs | DeepMind, OpenAI, Anthropic | Policy and governance roles |
| Government | UK Cabinet Office, US OSTP, EU AI Office | Direct policy influence |
| Think Tanks | CSET, RAND, CNAS | Research leadership |
| Academia | Oxford, Cambridge | Academic positions |
Key Publications
GovAI has published extensively in peer-reviewed venues and policy outlets. Their work is notable for bridging academic rigor with practical policy relevance.
Major Publications (2023-2025)
| Title | Year | Authors | Venue | Impact |
|---|---|---|---|---|
| Computing Power and the Governance of AI | 2024 | Anderljung, Heim, et al. | GovAI | Foundational compute governance reference |
| Safety Cases for Frontier AI | 2024 | GovAI/AISI | GovAI | Framework for demonstrating AI safety |
| Coordinated Pausing: An Evaluation-Based Scheme | 2024 | GovAI | GovAI | Proposes pause mechanism for dangerous capabilities |
| Training Compute Thresholds | 2024 | Heim, Koessler | White paper | Informs regulatory threshold-setting |
| Governing Through the Cloud | 2024 | Fist, Heim, et al. | Oxford | Cloud provider regulatory role |
| Frontier AI Regulation | 2023 | GovAI | GovAI | Tiered regulatory framework proposal |
| Standards for AI Governance | 2023 | GovAI | GovAI | International standards analysis |
Publication Venues
GovAI researchers have published in leading journals and conferences:
| Venue Type | Examples |
|---|---|
| Academic journals | Nature, Nature Machine Intelligence, Science, International Organization |
| CS conferences | NeurIPS, AAAI AIES, ICML |
| Policy outlets | Journal of Strategic Studies |
Policy Influence
GovAI's influence operates through multiple channels: direct government advisory, regulatory participation, talent placement, and intellectual framework-setting.
Direct Policy Engagement (2024-2025)
| Engagement | Role | Significance |
|---|---|---|
| EU GPAI Code of Practice | Vice-Chair (Anderljung) | Drafting Safety & Security chapter for AI Act implementation |
| UK Cabinet Office | Secondment (Anderljung, past) | Senior AI Policy Specialist |
| US AI Safety Institute Consortium | Member organization | Contributing to US AI safety standards |
| OECD AI Expert Group | Member (Heim) | AI Compute and Climate |
Framework Influence
GovAI's conceptual frameworks have shaped regulatory thinking:
| Framework | Adoption |
|---|---|
| Compute governance | Referenced in EU AI Act (10^25 FLOP threshold); US Executive Order |
| Tiered frontier regulation | Informs UK, EU, US approaches to frontier AI |
| Safety cases | Adopted by UK AISI as assessment framework |
Comparison with Peer Organizations
| Organization | Focus | Size | Budget | Policy Access |
|---|---|---|---|---|
| GovAI | AI governance research + field building | ≈20 | ≈$2M | High (EU, UK, US) |
| CSET (Georgetown) | Security + emerging tech | ≈50 | ≈$10M+ | High (US focus) |
| RAND AI | Broad AI policy | ≈30 | ≈$1M+ | High (US focus) |
| Oxford AI Governance | Academic research | ≈10 | ≈$1M | Medium |
GovAI is distinctive for combining research depth with direct regulatory participation—particularly through Anderljung's Vice-Chair role in EU AI Act implementation.
Funding
GovAI is primarily funded by Coefficient Giving, which has provided substantial support for AI governance work.
| Grant | Year | Amount | Purpose |
|---|---|---|---|
| General Support | 2024 | $1,800,000 | Core operations |
| General Support | 2023 | $1,000,000 | Core operations |
| Field Building | 2021 | $141,613 | Fellowship programs |
Strategic Assessment
Strengths
GovAI occupies a distinctive niche: producing rigorous, policy-relevant research while maintaining direct access to regulatory processes. Key strengths include:
- Compute governance expertise: Arguably the leading research group on this topic globally
- Talent pipeline: The fellowship program has trained a significant share of the AI governance workforce
- Policy access: Direct participation in EU AI Act implementation; alumni in key government roles
- Academic credibility: Publications in top venues; Oxford affiliation (historical)
Limitations
- Funding concentration: Heavy reliance on Coefficient Giving creates potential vulnerability
- Geographic focus: Primarily UK/US/EU; limited Global South engagement
- Implementation gap: Research excellence doesn't always translate to implementation capacity
- Scale constraints: Small team relative to policy influence ambitions
Key Uncertainties
| Question | Significance |
|---|---|
| Will compute governance prove tractable? | GovAI's signature bet |
| EU AI Act implementation success | Test of direct policy influence |
| Talent pipeline sustainability | Central to long-term impact |
| Funding diversification | Reduces single-funder risk |