GovAI

The Centre for the Governance of AI (GovAI) is a research organization focused on AI policy and governance. Originally part of the Future of Humanity Institute (FHI) at Oxford, GovAI began operating independently in 2021 and formally incorporated as a separate entity before FHI closed in April 2024. It is led by Director Ben Garfinkel, with Markus Anderljung directing policy and research.
Publications
| Title | Type | Authors | URL | Published | Flagship | Venue |
|---|---|---|---|---|---|---|
| Frontier AI Auditing: Toward Rigorous Third-Party Assessment | paper | Brundage, Dreksler, Homewood, McGregor et al. | governance.ai | 2026-01 | — | — |
| Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards | paper | Williams, Righetti, Rosenberg et al. | governance.ai | 2025-07 | — | — |
| Third-Party Compliance Reviews for Frontier AI Safety Frameworks | paper | Homewood, Williams, Dreksler, Lidiard, Garfinkel, Schuett et al. | governance.ai | 2025-05 | — | — |
| Infrastructure for AI Agents | paper | Chan, Wei, Huang, Rajkumar, Perrier, Lazar, Hadfield, Anderljung | governance.ai | 2025-01 | — | — |
| IDs for AI Systems | paper | Chan, Kolt, Wills, Anwar, Schroeder de Witt, Rajkumar, Hammond, Krueger, Heim, Anderljung | governance.ai | 2024-10 | — | — |
| Safety Cases for Frontier AI | paper | Buhl, Sett, Koessler, Schuett, Anderljung | governance.ai | 2024-10 | ✓ | — |
| A Grading Rubric for AI Safety Frameworks | paper | Alaga, Schuett, Anderljung | governance.ai | 2024-09 | — | — |
| From Principles to Rules: A Regulatory Approach for Frontier AI | paper | Schuett, Anderljung, Carlier, Koessler, Garfinkel | governance.ai | 2024-08 | — | — |
| GPTs are GPTs: An Early Look at the Labor Market Impact Potential of LLMs | paper | Eloundou, Manning, Mishkin, Rock | governance.ai | 2024-06 | ✓ | — |
| Visibility into AI Agents | paper | Chan, Ezell, Kaufmann, Wei, Hammond, Bradley, Bluemke, Rajkumar, Krueger, Kolt, Heim, Anderljung | governance.ai | 2024-06 | ✓ | — |
| Risk Thresholds for Frontier AI | paper | Koessler, Schuett, Anderljung | governance.ai | 2024-06 | — | — |
| Societal Adaptation to Advanced AI | paper | Bernardi, Mukobi, Greaves, Heim, Anderljung | governance.ai | 2024-05 | — | — |
| Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation | paper | Heim, Fist, Egan, Huang, Zekany, Trager, Osborne, Zilberman | governance.ai | 2024-03 | ✓ | — |
| Computing Power and the Governance of Artificial Intelligence | paper | Sastry, Heim, Anderljung et al. | arxiv.org | 2024-02 | ✓ | arXiv |
| What Should Be Internationalised in AI Governance? | paper | Trager, Garfinkel et al. | governance.ai | 2024 | — | — |
| Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Anderljung, Barnhart, Korinek et al. | governance.ai | 2023-11 | ✓ | — |
| Three Lines of Defense Against Risks from AI | paper | Schuett | governance.ai | 2023-10 | — | — |
| Open-Sourcing Highly Capable Foundation Models | paper | Seger, Dreksler, Moulange et al. | governance.ai | 2023-09 | — | — |
| International Governance of Civilian AI: A Jurisdictional Certification Approach | paper | Trager, Harack, Reuel, Carnegie, Heim, Ho et al. | governance.ai | 2023-08 | — | — |
| Model Evaluation for Extreme Risks | paper | Shevlane, Farquhar, Garfinkel et al. | arxiv.org | 2023-05 | ✓ | arXiv |
Related Pages
AI Governance and Policy
Comprehensive framework covering international coordination, national regulation, and industry standards.
Compute Governance
U.S. policies regulating advanced AI chip exports to manage AI development globally, particularly restrictions targeting China.
EU AI Act
The world's first comprehensive AI regulation, adopting a risk-based approach to regulate foundation models and general-purpose AI systems.
Ben Garfinkel
Director of the Centre for the Governance of AI.
Google DeepMind
Google's merged AI research lab behind AlphaGo, AlphaFold, and Gemini, formed from combining DeepMind and Google Brain in 2023 to compete with OpenAI.