GovAI | Home
GovAI helps decision-makers navigate the transition to a world with advanced AI, by producing rigorous research and fostering talent.
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Centre for the Governance of AI
GovAI is one of the most prominent AI governance research organizations globally; their publications on AI policy, international coordination, and existential risk governance are frequently cited in AI safety literature and policy discussions.
Metadata
Summary
The Centre for the Governance of AI (GovAI) is a leading research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI. It produces rigorous research on AI governance, policy, and societal impacts, while fostering a global talent pipeline for responsible AI oversight. GovAI bridges technical AI safety concerns with practical policy recommendations.
Key Points
- Produces research on AI governance challenges, including international coordination, AI policy, and the societal implications of advanced AI systems.
- Works with governments, international bodies, and other decision-makers to translate AI safety research into actionable policy frameworks.
- Focuses on long-term and existential risks from advanced AI, including issues of power concentration and maintaining human oversight.
- Fosters talent development in AI governance through fellowships, research programs, and community building.
- Bridges the gap between technical AI safety research and political/institutional decision-making processes.
Review
Cited by 17 pages
| Page | Type | Quality |
|---|---|---|
| Long-Horizon Autonomous Tasks | Capability | 65.0 |
| AI Safety Solution Cruxes | Crux | 65.0 |
| AI Safety Defense in Depth Model | Analysis | 69.0 |
| International AI Coordination Game Model | Analysis | 59.0 |
| AI Safety Intervention Effectiveness Matrix | Analysis | 73.0 |
| Intervention Timing Windows | Analysis | 72.0 |
| Mesa-Optimization Risk Analysis | Analysis | 61.0 |
| AI Safety Researcher Gap Model | Analysis | 67.0 |
| Alignment Research Center | Organization | 57.0 |
| Conjecture | Organization | 37.0 |
| GovAI | Organization | 43.0 |
| Survival and Flourishing Fund | Organization | 59.0 |
| Dario Amodei | Person | 41.0 |
| AI Governance and Policy | Crux | 66.0 |
| Pause / Moratorium | Concept | 72.0 |
| Compute Thresholds | Concept | 91.0 |
| Governance-Focused Worldview | Concept | 67.0 |
2 FactBase facts citing this source
Cached Content Preview
### OUR WORK HAS BEEN FEATURED IN:
[Fortune](https://fortune.com/2024/04/04/ai-training-costs-how-much-is-too-much-openai-gpt-anthropic-microsoft/) · [The Economist](https://www.economist.com/science-and-technology/2023/06/21/the-bigger-is-better-approach-to-ai-is-running-out-of-road) · [Foreign Policy](https://foreignpolicy.com/2023/10/19/biden-china-semiconductor-chip-industry-regulations-sanctions/) · [Financial Times](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2) · [The Atlantic](https://www.theatlantic.com/technology/archive/2023/10/technology-exports-ai-programs-regulations-china/675605/)
# Featured Analysis and Updates
**Economics**
[**Export Controls and Export Promotion**](https://www.governance.ai/research-paper/export-controls-and-export-promotion): Washington has made it clear that retaining AI dominance over China is both an economic and national security imperative. In November 2024, the U.S.-China Economic and Security Review Commission recommended...

**Survey Research**
[**What Does the Public Think About AI?**](https://www.governance.ai/research-paper/what-does-the-public-think-about-ai): Drawing from academic studies and public polling data, this report synthesises public attitudes towards AI with a focus on the United Kingdom and United States. It discusses public views on issues such as concern about job loss...

**Technical AI Governance**
[**Trends in Frontier AI Model Count: A Forecast to 2028**](https://www.governance.ai/research-paper/trends-in-frontier-ai-model-count-a-forecast-to-2028): Governments are starting to impose requirements on AI models based on how much compute was used to train them. For example, the EU AI Act imposes...

**Research Posts**
**Predicting AI’s Impact on Work**: If policymakers could foresee how AI will impact labour markets, they could make better policy decisions. The nascent science of “automation evaluations” aims
... (truncated, 4 KB total)