Governance-Focused Worldview
This worldview argues that governance and coordination, not just technical solutions, are the bottleneck for AI safety, with typical P(doom) estimates of 10-30% by 2100. Evidence includes: compute export controls cut Huawei's AI chip production 80-85% below capacity; 85% of DC AI lobbyists represent industry; US federal AI regulations more than doubled in 2024 (59 vs. 29); and historical precedent shows technology governance can work (the NPT held nuclear proliferation to 9 states, versus Kennedy's prediction of 15-20 by 1975).
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Core claim | Governance bottleneck exceeds technical bottleneck | 85% of DC AI lobbyists represent industry; labs face structural racing dynamics |
| Historical precedent | Strong | Nuclear treaties prevented proliferation; Montreal Protocol phased out CFCs; FDA approval process |
| Policy momentum | Accelerating | US federal agencies issued 59 AI regulations in 2024 (2x 2023); EU AI Act entered force August 2024 |
| International coordination | Feasible but challenging | US-China AI dialogue began May 2024; joint UN AI resolution passed June 2024 |
| Regulatory capture risk | Moderate to high | Organizations lobbying on AI rose 41% in 2024 (648 vs. 458); OpenAI increased lobbying 7x year-over-year |
| Compute governance | Most concrete lever | Export controls reduced Huawei's AI chip production by 80-85% vs. capacity |
| P(doom) range | 10-30% | Emphasis on policy and coordination as key levers for risk reduction |
Core belief: Whether alignment is technically tractable or not, the bottleneck is getting good solutions adopted. Governance, coordination, and institutional change are the key levers.
Expert Probability Estimates
Those adopting a governance-focused worldview typically estimate AI existential catastrophe probability in the 10-30% range by 2100, reflecting optimism that policy interventions can meaningfully reduce risk if implemented effectively:
| Expert/Source | Estimate | Reasoning |
|---|---|---|
| Governance-focused view | 10-30% | This range reflects the belief that existential risk is substantial but not inevitable given effective governance interventions. The lower bound assumes successful international coordination, meaningful compute governance, and institutional reforms that align AI development with safety priorities. The upper bound accounts for scenarios where regulatory capture undermines policy effectiveness, international coordination fails due to competitive dynamics, or governance efforts prove too slow relative to capability development. Unlike purely technical perspectives, this estimate emphasizes policy and coordination mechanisms as the primary levers for risk reduction, suggesting that even if alignment remains technically challenging, appropriate institutional structures can create the time and incentives needed to solve it. |
Overview
The governance-focused worldview holds that the primary challenge isn't just solving alignment technically, but ensuring that solutions are actually implemented. Even with perfect technical solutions, competitive dynamics, institutional failures, or coordination problems could lead to catastrophe.
This perspective emphasizes that AI development doesn't happen in a vacuum. It's shaped by economic incentives, regulatory frameworks, international relations, corporate culture, and political will. The path to safe AI runs through these institutions.
Unlike pure technical optimism, governance-focused thinkers recognize that labs face competitive pressures that may override safety concerns. Unlike pure technical pessimism, they believe that shaping the development environment can significantly reduce risk.
The Governance Gap
flowchart TD
TECH[Technical Safety Research] --> GAP{Adoption Gap}
GAP --> DEPLOY[Deployed Systems]
COMP[Competitive Pressure] --> GAP
INTL[International Racing] --> GAP
ECON[Economic Incentives] --> GAP
GOV[Governance Interventions] --> BRIDGE[Bridge the Gap]
BRIDGE --> DEPLOY
GOV --> REG[Regulation]
GOV --> COORD[Coordination]
GOV --> STAND[Standards]
GOV --> COMP_GOV[Compute Governance]
style GAP fill:#ffcccc
style BRIDGE fill:#ccffcc
style GOV fill:#ccffcc
style DEPLOY fill:#ffffcc

The governance perspective identifies a structural gap between safety research and adoption, driven by competitive dynamics that governance interventions must bridge.
Characteristic Beliefs
| Crux | Typical Governance-Focused Position |
|---|---|
| Timelines | Enough time for governance to matter |
| Alignment difficulty | Important but not the only factor |
| Coordination | Crucial and achievable |
| Lab incentives | Won't naturally prioritize safety enough |
| Policy effectiveness | Can meaningfully shape outcomes |
| International dynamics | Key to overall outcome |
| Public opinion | Matters for what's politically feasible |
| Corporate structure | Shapes what research gets done |
| P(doom) | 10-30% (varies) |
Key Distinctions
Not just technical: Governance-focused people believe technical solutions are necessary but not sufficient. The challenge is sociotechnical.
Not defeatist: Unlike doomers, they believe coordination and governance can work with enough effort and political will.
Not naive: Unlike pure optimists, they recognize that market incentives don't naturally lead to safety.
Pragmatic: Focus on actionable interventions in policy, institutions, and incentive structures.
Core Arguments
1. Deployment Is What Matters
Even perfect alignment research sitting in a paper helps no one if systems deployed in the real world are unaligned.
Key insight: The gap between "research exists" and "research is adopted" is where catastrophe likely occurs.
Examples:
- Labs might skip safety testing under competitive pressure
- International competitors might ignore safety standards
- First-movers might deploy before safety is verified
- Economic pressure might override safety concerns
2. Racing Dynamics Are Structural
Competition pushes safety aside:
Between labs: First to AGI captures enormous value, creating winner-take-all dynamics
Between countries: AI leadership brings military and economic advantages
Between researchers: Career incentives reward capability advances over safety
Between investors: Returns come from deployment, not safety research
These aren't about individual actors being reckless - they're structural problems requiring structural solutions.
3. Governance Has Historical Precedent
Technology governance has worked before, with measurable impact:
| Technology Domain | Key Intervention | Measurable Outcome |
|---|---|---|
| Nuclear weapons | NPT (1970) + IAEA verification | 9 nuclear states vs. Kennedy's predicted 15-20 by 1975 |
| CFCs | Montreal Protocol (1987) | 99% reduction in production; ozone layer recovering |
| Pharmaceuticals | FDA approval (1962 Kefauver-Harris) | Pre-market safety testing prevented thalidomide-scale disasters in US |
| Aviation | FAA regulations + ICAO standards | Fatal accidents: 0.07 per million flights (2023) vs. 5+ in 1950s |
| Biotechnology | Asilomar (1975) + NIH guidelines | No major recombinant DNA incidents in 50 years |
| Financial regulation | Dodd-Frank (2010) | Bank capital requirements increased 2-3x; stress testing institutionalized |
While imperfect, these show that governance can shape powerful technologies. The common pattern: early intervention during development, international coordination, and verifiable standards.
4. Policy Shapes What Research Happens
Regulation and funding influence the technical landscape:
- Safety requirements drive research toward robust solutions
- Compute governance changes what's feasible to develop
- Funding priorities determine which approaches get explored
- Disclosure requirements enable coordination
- Standards create benchmarks for progress
Policy isn't just reactive - it can proactively shape the technical trajectory.
5. Bottleneck Is Adoption, Not Invention
For many challenges, we know what to do - the question is whether we'll do it:
- Evals: We can run safety tests, but will labs use them?
- Red teaming: We can probe for failures, but will findings stop deployment?
- Interpretability: We can study model internals, but will opacity block deployment?
- Safety training: We can improve alignment techniques, but will labs cut corners anyway?
Governance closes the gap between "can" and "will."
Current Governance Landscape (2024-2025)
Recent developments demonstrate both momentum and challenges in AI governance:
Policy Activity
According to the 2025 Stanford AI Index, US federal agencies introduced 59 AI-related regulations in 2024—more than double 2023. Globally, legislative mentions of AI rose 21.3% across 75 countries since 2023, marking a ninefold increase since 2016.
| Jurisdiction | Key Development | Status |
|---|---|---|
| European Union | AI Act | Entered force August 2024; prohibited practices effective February 2025; full application August 2026 |
| United States | Executive Order 14110 | Issued October 2023; rescinded January 2025; compute reporting thresholds debated |
| United Kingdom | AI Safety Institute | Established 2023; first joint evaluation with US AISI November 2024 |
| China | Global AI Governance Action Plan | Announced July 2025; 13-point roadmap for international coordination |
International Coordination Efforts
The RAND analysis on US-China AI cooperation identifies promising areas for dialogue despite competition:
- May 2024: First US-China intergovernmental AI dialogue in Geneva
- June 2024: UN General Assembly unanimously passed China-led AI cooperation resolution (US supported)
- November 2024: US-China agreement that humans, not AI, should make nuclear weapons decisions
- July 2025: China proposed global AI cooperation organization at WAIC
Regulatory Capture Risk
RAND research on AI regulatory capture and OpenSecrets lobbying data reveal industry influence:
- 648 companies lobbied on AI in 2024 (vs. 458 in 2023, a 41% increase)
- OpenAI increased lobbying spending 7x ($1.76M in 2024 vs. $260K in 2023)
- 85% of DC AI lobbyists work for industry organizations
- Many former congressional tech staffers now lobby for AI companies
Key Organizations and Proponents
Research Organizations
Center for AI Safety (CAIS) - Policy Arm
- Focus on compute governance, international coordination
- Organizes stakeholder convenings
- Advises policymakers
GovAI (Governance of AI Program)
- Part of Oxford
- Academic research on AI governance
- Policy recommendations based on rigorous analysis
Center for AI Policy (CAIP)
- Direct policy advocacy
- Works with legislators on AI regulation
- Focus on US policy
Future of Humanity Institute (FHI, closed 2024)
- Long-term governance research
- Strategy and cooperation studies
Think Tanks
- RAND Corporation AI projects
- Center for Security and Emerging Technology (CSET)
- Various national security think tanks
Individual Voices
Allan Dafoe: Founder and former Director of GovAI, now Director of Frontier Safety and Governance at Google DeepMind. Author of the foundational AI Governance: A Research Agenda (2018).
"The challenge isn't just building safe AI - it's building institutions that ensure AI is developed safely."
Jess Whittlestone: Research on AI ethics and governance at the Centre for Long-Term Resilience
Markus Anderljung: Work on compute governance and standards at GovAI; co-author of influential compute governance papers
Gillian Hadfield: Legal and institutional frameworks for AI; Professor at Johns Hopkins and Director of the Schwartz Reisman Institute
Helen Toner: Former OpenAI board member; Georgetown CSET research on international AI policy
Priority Approaches
Given governance-focused beliefs, key priorities include:
1. Governance and Policy
Domestic regulation:
- Safety testing requirements before deployment
- Mandatory incident reporting
- Audit and oversight mechanisms
- Liability frameworks
International coordination:
- Multilateral agreements on safety standards
- Information sharing on risks and incidents
- Coordinated restrictions on dangerous capabilities
- Verification mechanisms
Standards and certification:
- Industry safety standards
- Third-party auditing
- Transparency requirements
- Best practices codification
2. Compute Governance
Compute is a physical chokepoint that can be governed. RAND research and analysis by the Council on Foreign Relations demonstrate measurable effects:
| Intervention | Implementation | Measured Effect |
|---|---|---|
| US chip export controls (Oct 2022) | Restricted advanced AI chips to China | Chinese stockpiling delayed impact; DeepSeek trained on pre-control chips |
| High-bandwidth memory controls (Dec 2024) | Added HBM to controlled items | Huawei projected 200-300K chips vs. 1.5M capacity (80-85% reduction) |
| SME equipment controls | Restricted lithography, etch, deposition | Chinese AI companies report 2-4x power consumption penalty |
| Dutch/Japanese coordination (2023) | Aligned export controls with US | 9-month enforcement delay enabled $5B stockpiling |
Supply chain interventions:
- Track production and distribution of AI chips
- Require reporting for large training runs (thresholds around 10^26 FLOP proposed; see the sketch after this list)
- Restrict access to frontier compute
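A reporting threshold like 10^26 FLOP is enforceable in principle because total training compute can be estimated from model size and data volume. Below is a minimal sketch of that arithmetic, assuming the common ~6 × parameters × tokens approximation for dense transformer training; the run size is a hypothetical example, not a real model.

```python
# Back-of-envelope check of whether a training run crosses a reporting
# threshold, using the standard ~6 * parameters * tokens estimate of
# total FLOP for dense transformer training.

REPORTING_THRESHOLD_FLOP = 1e26  # proposed trigger discussed above

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Hypothetical frontier-scale run: 1T parameters trained on 30T tokens.
flop = training_flop(1e12, 30e12)
print(f"Estimated training compute: {flop:.1e} FLOP")
print("Reportable" if flop >= REPORTING_THRESHOLD_FLOP else "Below threshold")
# 6 * 1e12 * 30e12 = 1.8e26 FLOP, above the 1e26 threshold
```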
International coordination:
- Export controls on advanced chips
- Multilateral agreements on compute limits
- Verification of compliance
Advantages:
- Verifiable (large training runs require ~10,000+ GPUs, detectable via power consumption; see the arithmetic after this list)
- Implementable (chip production concentrated: TSMC produces 90%+ of advanced chips)
- Effective (compute is necessary for frontier AI; cloud access can be revoked)
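The verifiability claim rests on simple physics: a frontier-scale cluster draws enough power to show up in grid and facility data. A back-of-envelope sketch, with assumed (not measured) per-GPU draw and datacenter overhead:

```python
# Rough power-footprint arithmetic behind the detectability claim.
# Per-GPU draw and PUE are ballpark assumptions for illustration only.

gpus = 10_000       # lower bound for a frontier training run, per the text
gpu_watts = 700     # assumed TDP of a modern datacenter accelerator
pue = 1.3           # assumed power usage effectiveness (cooling, networking)

facility_mw = gpus * gpu_watts * pue / 1e6
print(f"Estimated facility draw: {facility_mw:.1f} MW")
# ~9 MW sustained for months: visible in utility records and thermal
# signatures, which is why compute is easier to monitor than algorithms.
```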
3. Lab Safety Culture
Change incentives and practices inside AI labs:
Institutional design:
- Safety-focused board structures
- Independent safety oversight
- Whistleblower protections
- Safety budgets and teams
Norms and culture:
- Reward safety work at parity with capabilities
- Safety reviews before deployment
- Conservative deployment decisions
- Open sharing of safety techniques
Talent and recruitment:
- Hire safety-minded researchers
- Train leadership on risk
- Build safety expertise
4. Evals and Standards
Create accountability through measurement:
Dangerous capability evaluations:
- Test for deception, situational awareness, autonomy
- Red teaming for misuse potential
- Benchmarks for alignment properties
Disclosure and transparency:
- Publish evaluation results
- Share safety incidents
- Document training procedures
Conditional deployment (see the sketch after this list):
- Deploy only after passing evals
- Continuous monitoring post-deployment
- Rollback procedures for failures
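A minimal sketch of what such a deployment gate could look like in code; the eval names, thresholds, and stub scores below are hypothetical placeholders, not any lab's actual eval suite:

```python
# Minimal conditional-deployment gate: run dangerous-capability evals,
# deploy only if every one passes its threshold. All names, thresholds,
# and scores are hypothetical placeholders for illustration.

RISK_THRESHOLDS = {"deception": 0.10, "autonomy": 0.10, "cyber_misuse": 0.05}

def run_eval(model_id: str, eval_name: str) -> float:
    """Stub: in practice this would call a red-teaming / eval harness
    and return a risk score in [0, 1]."""
    return {"deception": 0.04, "autonomy": 0.02, "cyber_misuse": 0.08}[eval_name]

def clear_for_deployment(model_id: str) -> bool:
    """Gate deployment on every eval passing its threshold."""
    for name, threshold in RISK_THRESHOLDS.items():
        score = run_eval(model_id, name)
        if score > threshold:
            print(f"{model_id} blocked: {name} scored {score:.2f} > {threshold:.2f}")
            return False
    print(f"{model_id} cleared for deployment")
    return True

clear_for_deployment("frontier-model-v1")  # blocked: cyber_misuse 0.08 > 0.05
```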
5. International Coordination
Prevent race-to-the-bottom dynamics. Research from Oxford's International Affairs journal and the Brookings Institution analyzes pathways:
US-China cooperation:
- Scientist-to-scientist dialogue (Track 2)
- Government working groups (Geneva dialogue May 2024)
- Joint safety research on shared risks
- Mutual verification for compute thresholds
Multilateral frameworks:
- UN High-Level Advisory Body on AI (final report September 2024)
- Proposal for an international AI agency
- Bletchley Declaration (2023) and Seoul Frontier AI Safety Commitments (2024)
- G7 Hiroshima AI Process
Track 2 diplomacy:
- Academic and NGO engagement across borders
- Build relationships before crisis
- Establish communication channels
- Former Google CEO Eric Schmidt at WAIC 2025: "The United States and China should collaborate on these issues"
Deprioritized Approaches
Not that these are useless, but they're less central given governance-focused beliefs:
| Approach | Why Less Central |
|---|---|
| Agent foundations | Too theoretical, not immediately actionable |
| Pause advocacy | Prefer incremental governance to binary stop/go |
| Pure technical research | Useful but insufficient without adoption mechanisms |
| Individual lab efforts | Need structural change, not voluntary action |
Strongest Arguments
1. Technical Solutions Need Implementation Paths
Scenario: Researchers develop a breakthrough in alignment - robust interpretability that can detect deceptive AI.
Without governance: Labs might not use it because:
- It slows down development
- Competitors aren't using it
- It might reveal problems that block profitable deployment
- No regulatory requirement forces adoption
With governance: Requirements make adoption happen:
- Regulators mandate interpretability checks before deployment
- Standards bodies incorporate it into certification
- Liability frameworks penalize deployment without verification
- International agreements create level playing field
2. Market Failures in Safety
AI development exhibits classic market failures. Research on the economics of AI safety investment identifies structural barriers even when safety investment would be socially optimal:
| Market Failure Type | Mechanism | Quantified Impact |
|---|---|---|
| Negative externalities | Individual actors bear safety costs, society bears risk | Estimated $10-100B+ in potential catastrophic externalities not priced |
| Public goods undersupply | Safety techniques can be copied | Safety research estimated at 2-5% of AI R&D vs. 10-20% optimal |
| Information asymmetry | Labs know more than regulators | Model cards cover less than 30% of safety-relevant properties |
| Competitive dynamics | First-mover advantage incentivizes rushing | Average time from research to deployment: 18 months (2020) to 6 months (2024) |
Externalities: Individual actors bear costs of safety but don't capture all benefits
- Lab that slows down loses competitive advantage
- Society bears risk of all actors' decisions
- First-mover advantage incentivizes rushing
Public goods: Safety research benefits everyone, so undersupplied
- Safety techniques can be copied
- Individual labs underinvest
- Coordination problem
Information asymmetry: Labs know more about their systems than society
- Can hide safety problems
- Regulators can't assess risk independently
- Public can't make informed decisions
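The competitive-dynamics failure has the structure of a prisoner's dilemma, which a toy two-lab game makes explicit. All payoff numbers below are illustrative assumptions:

```python
# Toy two-lab game: each lab chooses to invest in safety ("safe") or
# cut corners to win the race ("race"). Racing yields a private gain
# but adds a shared catastrophe-risk cost that both labs bear.
# Payoff numbers are illustrative assumptions only.

import itertools

def payoff(me: str, other: str) -> int:
    private = 12 if me == "race" else 7           # racing wins market share
    shared_risk = -4 * [me, other].count("race")  # externality hits everyone
    return private + shared_risk

for a, b in itertools.product(["safe", "race"], repeat=2):
    print(f"A={a:4s} B={b:4s} -> A: {payoff(a, b)}, B: {payoff(b, a)}")

# safe/safe gives each lab 7, race/race gives each 4; yet "race" is the
# dominant strategy (8 > 7 against a safe rival, 4 > 3 against a racing
# one), so the equilibrium is race/race. Governance changes the payoffs.
```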
Governance role: Correct these market failures through regulation, incentives, and information requirements.
3. Speed-Safety Tradeoffs Are Real
Organizations face genuine tradeoffs:
At labs:
- Thorough safety testing vs. fast iteration
- Open publication vs. competitive advantage
- Conservative deployment vs. market capture
- Safety talent vs. capability talent
At national level:
- Domestic safety rules vs. international competitiveness
- Beneficial applications now vs. safety later
- Economic growth vs. caution
Without governance, these tradeoffs systematically favor speed over safety.
4. Institutions Shape Technology
Historical pattern: Technology is shaped by the institutional context:
Nuclear weapons: International treaties and norms prevented proliferation scenarios that seemed inevitable in 1945
CFCs: Montreal Protocol phased out dangerous chemicals despite economic costs
Automotive safety: Regulations drove seat belts, airbags, crumple zones despite industry resistance
Pharmaceuticals: FDA approval process, for all its flaws, prevents many dangerous drugs
AI precedent: Social media shows what happens without governance - externalities dominate
5. Windows of Opportunity Close
Governance is easiest before deployment:
Pre-deployment:
- Can shape standards before lock-in
- Public is attentive to hypothetical risks
- Industry is more willing to coordinate
- International cooperation is feasible
Post-deployment:
- Massive economic interests resist change
- Coordination becomes harder
- Public may acclimate to risks
- Path dependency limits options
Current moment may be critical window for establishing governance.
Main Criticisms and Counterarguments
"Government Is Too Slow"
Critique: AI moves faster than government. Regulations will be obsolete before they're implemented.
Response:
- Principles-based regulation can be flexible
- Compute governance targets physical layer that changes slowly
- International norms and standards can move faster than formal regulation
- Even slow governance beats no governance
- Private governance (standards bodies) can complement public
"Regulatory Capture Is Inevitable"
Critique: Industry will capture regulators, resulting in theater without substance. Evidence from Nature shows AI companies have successfully weakened state-level AI legislation.
Response:
- Capture is a risk to manage, not a certainty; RAND proposes specific countermeasures
- Multi-stakeholder processes reduce capture risk
- International competition limits capture (EU AI Act creates pressure)
- Public attention and advocacy create accountability
- Design institutions with capture resistance (independent oversight, transparency, mandatory disclosure of lobbying)
"International Coordination Is Impossible"
Critique: US-China rivalry makes cooperation impossible. Any governance will fail due to racing.
Response:
- Even adversarial nations cooperate on shared risks (nuclear, climate, pandemic)
- Scientists often cooperate even when governments compete
- Track 2 diplomacy can build foundations
- Racing doesn't help either side if both face existential risk
- Can build cooperation incrementally
"This Just Delays the Inevitable"
Critique: Governance might slow AI development but can't stop it. We're just postponing doom.
Response:
- Time to solve alignment has enormous value
- Shaping development trajectory matters even if we can't stop it
- Coordination could enable pause until safety is solved
- "Can't solve it permanently" doesn't mean "don't try"
"Overestimates Policy Effectiveness"
Critique: Policy is regularly ineffective. Look at climate, financial regulation, social media.
Response:
- Failures exist but so do successes (see examples above)
- AI may get more political attention than those issues
- Can learn from past failures to design better governance
- Partial success is better than no attempt
- Alternative is market failures with no correction
"Doesn't Address Fundamental Technical Problems"
Critique: Governance can't solve alignment if it's fundamentally unsolvable.
Response:
- Governance people don't claim it's sufficient alone
- Even if technical work is needed, adoption still requires governance
- Governance can buy time for technical solutions
- Can ensure technical solutions that exist get used
What Evidence Would Change This View?
Governance-focused people would update away from this worldview given:
Governance Failures
- Repeated ineffectiveness: Policies consistently having no impact
- Capture demonstrated: Industry fully capturing regulatory process
- International impossibility: Clear proof cooperation can't happen
- Backfire effects: Regulations consistently making things worse
Technical Developments
- Self-enforcing alignment: Technical solutions that work regardless of adoption
- Natural safety: Capability and alignment turn out to be linked
- Automatic detection: Systems that can't help but reveal misalignment
Empirical Evidence
- Market success: Labs voluntarily prioritizing safety without pressure
- Speed irrelevant: Very long timelines making urgency moot
- Technical bottleneck: Alignment clearly the bottleneck, not adoption
Implications for Action and Career
If you hold this worldview, prioritized actions include:
Policy Careers
Government:
- Work in relevant agencies (NIST, OSTP, DoD, State Department)
- Legislative staffer focused on AI
- International organization (UN, OECD)
Advocacy:
- AI safety advocacy organizations
- Think tanks and policy research
- Direct lobbying and education
Expertise building:
- Technical background + policy knowledge
- Understand both AI and governance
- Bridge between technical and policy communities
Research and Analysis
Academic research:
- AI governance studies
- International relations and cooperation
- Institutional design
- Science and technology policy
Applied research:
- Policy recommendations
- Institutional design proposals
- Coordination mechanisms
- Measurement and metrics
Industry and Lab Engagement
Internal reform:
- Safety governance roles at labs
- Board-level engagement
- Corporate governance consulting
Standards and best practices:
- Industry working groups
- Standards body participation
- Safety certification development
Communication and Field-Building
Public education:
- Explain AI governance to broader audiences
- Build political will for action
- Counter misconceptions
Community building:
- Connect policy and technical communities
- Facilitate dialogue between stakeholders
- Build coalitions for action
Internal Diversity
The governance-focused worldview includes significant variation:
Regulatory Philosophy
Heavy regulation: Comprehensive rules, strict enforcement, precautionary principle
Light-touch regulation: Principles-based, flexibility, market-friendly
Hybrid: Different approaches for different risks
International Focus
US-focused: Work within US system first
China-focused: Engage Chinese stakeholders
Multilateral: Build international institutions
Theory of Change
Top-down: Government regulation drives change
Bottom-up: Industry standards and norms
Multi-level: Combination of approaches
Risk Assessment
High-risk governance: Governance is urgent, major changes needed
Moderate-risk governance: Important but not emergency
Uncertainty-focused: Governance for unknown unknowns
Relationship to Other Worldviews
vs. Doomer
Agreements:
- Risk is real and substantial
- Current trajectory is concerning
- Coordination is important
Disagreements:
- Governance folks more optimistic about coordination
- Less focus on fundamental technical impossibility
- More emphasis on implementation than invention
vs. Optimistic
Agreements:
- Technical progress is possible
- Solutions can be found with effort
Disagreements:
- Optimists think market will provide safety
- Governance folks see market failures requiring intervention
- Different views on default outcomes
vs. Long-Timelines
Agreements:
- Have time for institutional change
- Can build careful solutions
Disagreements:
- Governance folks think shorter timelines still plausible
- More urgency about building institutions now
- Focus on current systems, not just future ones
Practical Considerations
What Success Looks Like
Near-term (1-3 years):
- Safety testing requirements for frontier models
- Compute governance framework established
- International dialogue mechanisms exist
- Industry safety standards emerging
Medium-term (3-10 years):
- Meaningful international coordination
- Verified compliance with safety standards
- Independent oversight functioning
- Safety competitive with capabilities
Long-term (10+ years):
- Robust governance for transformative AI
- International cooperation preventing races
- Safety culture deeply embedded
- Continuous adaptation to new challenges
Key Uncertainties
Political feasibility: Will there be political will for serious governance?
International cooperation: Can US-China find common ground?
Industry response: Will labs cooperate or resist?
Technical trajectory: Will governance be fast enough?
Public opinion: Will public support or oppose AI governance?
Representative Quotes
"We keep debating whether the AI itself will be aligned, but we're not asking whether the institutions building AI are aligned with humanity's interests." - Allan Dafoe
"Even if we solve alignment technically, we face the problem that the first actor to deploy doesn't face the full costs of getting it wrong. That's a market failure requiring governance." - Gillian Hadfield
"Compute governance isn't about stopping AI - it's about making sure we can see what's happening and coordinate our response." - Lennart Heim
"The challenge is that everyone in the room agrees we need more safety, but the incentives push them to cut corners anyway. That's a structural problem." - Helen Toner
"International cooperation on AI might seem impossible, but so did arms control during the Cold War. We need to build institutions for cooperation before crisis." - Governance researcher
Common Misconceptions
"Governance people want to stop AI": No, they want to shape development to be safe
"It's just bureaucrats slowing down innovation": Many are technically sophisticated and pro-innovation
"Governance is about current AI harms, not existential risk": Governance-focused safety people focus on both
"It's anti-competitive": Safety requirements can preserve competition while preventing races-to-the-bottom
"It's just about regulation": Also includes norms, standards, coordination, and institutions
Recommended Reading
Foundational Texts
- The Governance of AI - FHI Research Agenda
- AI Governance: A Research Agenda - Allan Dafoe (2018)
- Computing Power and the Governance of AI - Sastry, Heim, Belfield et al. (2024)
Policy Analysis
- Intermediate AI Governance - Nick Bostrom
- Decoupling Deliberation and Deployment - Paul Christiano
- Racing Through a Minefield
- Global AI governance: barriers and pathways forward - International Affairs (2024)
- AI Governance in a Complex and Rapidly Changing Regulatory Landscape - Nature (2024)
International Coordination
- US-China Cooperation on AI Safety - GovAI
- International Cooperation on AI Governance - FHI
- Potential for U.S.-China Cooperation on Reducing AI Risks - RAND (2024)
- Promising Topics for US–China Dialogues on AI Risks - ACM FAccT (2025)
Compute Governance
- Compute-Based Regulations - GovAI
- Visibility into AI Chips - Yonadav Shavit (2023)
- Understanding the AI Diffusion Framework - RAND (2025)
- Understanding US Allies' Legal Authority on Export Controls - CSIS (2024)
Institutional Design
- Auditing for Large Language Models - GovAI
- Model Evaluation for Extreme Risks - Shevlane et al. (2023)
- Managing Industry Influence in U.S. AI Policy - RAND (2024)
- AI Governance Profession Report 2025 - IAPP
References
1Strengthening international cooperation on artificial intelligence | BrookingsBrookings Institution▸
This Brookings Institution article examines the challenges and mechanisms for building effective international cooperation on AI governance, arguing that coordinated global approaches are necessary to manage AI risks and ensure beneficial outcomes. It explores existing multilateral frameworks, their limitations, and proposes pathways for stronger cross-border collaboration on AI standards and safety.
This page outlines the Future of Humanity Institute's research agenda on AI governance, examining how societies, institutions, and international bodies should oversee and regulate advanced AI systems. It identifies key questions around policy frameworks, coordination mechanisms, and governance structures needed to ensure AI development proceeds safely and beneficially. The agenda reflects FHI's broader mission to address large-scale risks from transformative technologies.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, establishing a risk-based classification system for AI applications. It imposes varying obligations on developers and deployers depending on the risk level of their AI systems, from minimal-risk to unacceptable-risk categories. The act sets precedents for global AI governance and compliance requirements.
This resource appears to be a research paper on auditing frameworks for large language models, but the page is currently inaccessible (404 error). Based on the URL and title, it likely addressed methodologies for evaluating LLM behavior, safety, and compliance from a governance perspective.
Allan Dafoe's foundational 2018 paper for the Centre for the Governance of AI (GovAI) provides a structured framework for AI governance research, identifying pivotal questions and organizing the problem landscape. It serves as an orientation document for researchers entering the field, enumerating key research priorities and referencing relevant literature.
This resource appears to be unavailable (404 error), so its specific content cannot be assessed. Based on the title and source (governance.ai), it likely addressed frameworks or proposals for bilateral US-China cooperation on AI safety governance.
This thesis examines AI safety investment through market failure theory, identifying how public goods characteristics, information asymmetries, coordination problems, and limited liability frameworks create structural underinvestment in AI safety. It evaluates policy interventions including direct funding, tax incentives, and regulatory frameworks, proposing institutional arrangements to support long-term safety investment.
This paper proposes a technical framework for governments to monitor and verify compliance with international agreements on large-scale AI training by tracking specialized ML chips. The system uses on-chip firmware for weight snapshots, training run documentation, and supply chain monitoring to provide high confidence of compliance while preserving model privacy. It decomposes the verification problem into narrow technical challenges including a new variant of Proof-of-Learning.
Allan Dafoe's foundational research agenda for the field of AI governance, outlining the problem space across three clusters: understanding the technical landscape, analyzing AI politics among key actors, and envisioning ideal governance structures. The document argues that scholarly attention to AI governance is critically insufficient given the potential magnitude of risks including strategic instability, reinforced totalitarianism, and unsafe AI races.
This article from the journal International Affairs (Oxford) addresses AI governance and its implications for international security and global policy coordination. The piece likely examines how states and international institutions are responding to the challenges posed by advanced AI systems, including prioritization of interventions and their effectiveness.
Personal homepage of Allan Dafoe, a leading figure in AI governance and safety. Dafoe is Senior Director of Frontier Strategy and Governance at Google DeepMind, founder of the Centre for the Governance of AI (GovAI), and founder/trustee of the Cooperative AI Foundation. His work focuses on guiding frontier AI development for the common good through research and institutional leadership.
This paper addresses the critical role of model evaluation in mitigating extreme risks from advanced AI systems. As AI development progresses, general-purpose AI systems increasingly possess both beneficial and harmful capabilities, including potentially dangerous ones like offensive cyber abilities or manipulation skills. The authors argue that two types of evaluations are essential: dangerous capability evaluations to identify harmful capacities, and alignment evaluations to assess whether models are inclined to use their capabilities for harm. These evaluations are vital for informing policymakers and stakeholders, and for making responsible decisions regarding model training, deployment, and security.
Paul Christiano explores whether unaligned AI systems—those pursuing goals other than human values—might nonetheless deserve moral consideration and contribute to a good future. The piece argues that under moral uncertainty and cooperation incentives, some unaligned AIs may warrant sympathy, offering a 'plan B' for beneficial outcomes beyond traditional alignment. Key considerations include consciousness, decision theory, and which specific AI goal-structures merit moral weight.
This resource could not be retrieved, returning a 404 error. The document was presumably a report from the Machine Intelligence Research Institute (MIRI) on intermediate-term AI governance frameworks, but its content is inaccessible.
This paper conducts a systematic analysis of over 40 AI policy documents from the US and China to identify areas of convergence in AI governance approaches. It finds meaningful overlap in concerns about algorithmic transparency, system reliability, and multi-stakeholder engagement, suggesting concrete opportunities for bilateral cooperation despite geopolitical tensions.
The article examines how the EU's AI Act may trigger a 'Brussels effect,' where EU regulations influence AI governance globally and in US states, similar to how GDPR shaped data privacy laws. With federal AI legislation stalled in the US, states are taking the lead on AI regulation, but the article's title suggests that corporate lobbying is significantly influencing the shape and effectiveness of these state-level AI laws, potentially weakening their protective provisions.
The IAPP's 2025 AI Governance Profession Report surveys the emerging field of AI governance as a professional discipline, examining workforce trends, organizational roles, required competencies, and the maturity of AI governance functions across industries. It provides data-driven insights into how organizations are structuring AI oversight and the skills demanded of AI governance professionals.
China's official AI governance framework, released at the 2025 World AI Conference, establishes principles for international cooperation treating AI as a global public good. It emphasizes national sovereignty, safety, controllability, fairness, and open cooperation, calling for coordinated action across stakeholders to advance innovation while maintaining human oversight and ethical development.
OpenSecrets reports that lobbying activity related to artificial intelligence reached record levels in 2024, with a surge in companies and organizations seeking to influence AI-related legislation and regulation. The data highlights the growing political economy around AI governance, as tech firms, industry groups, and other stakeholders invest heavily in shaping federal AI policy.
Yudkowsky argues that competitive market incentives systematically drive the creation of 'superstimuli'—products engineered to exploit evolved preferences so intensely they override basic survival instincts. Without incentives aligned to genuine human welfare, markets will produce increasingly potent engagement-maximizing products that cause serious harm. This serves as a conceptual foundation for understanding misaligned AI deployment risks.
This RAND Corporation publication (PEA3776-1) addresses policy and governance considerations related to artificial intelligence, likely examining risks, regulatory frameworks, or national security implications of advanced AI systems. The full text was not accessible; the resource appears to be a RAND 'Perspectives' paper, a format that typically offers analysis and recommendations on emerging policy challenges.
This paper from the Centre for the Governance of AI examines regulatory approaches that use computational resources (compute) as a lever for governing AI development and deployment. It analyzes how tracking, limiting, or monitoring compute usage could enable policymakers to oversee frontier AI systems and enforce safety standards. The paper evaluates the feasibility, advantages, and limitations of compute-based regulatory frameworks.
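As a concrete illustration of a compute-based trigger: regulators have already published FLOP thresholds, including the 10^25-FLOP presumption of systemic risk in the EU AI Act and the 10^26-operation reporting threshold in the 2023 US executive order on AI. The sketch below combines those published numbers with the standard ~6·N·D approximation for dense-transformer training compute; the helper function itself is an illustrative assumption, not any regulator's tool.

```python
# Published thresholds (total training operations).
EU_AI_ACT_SYSTEMIC_RISK = 1e25   # EU AI Act presumption of systemic risk
US_EO_REPORTING = 1e26           # 2023 US executive order reporting threshold

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens,
    the standard rule of thumb for dense transformers."""
    return 6.0 * n_params * n_tokens

# Example: a 400B-parameter model trained on 15T tokens.
flop = estimated_training_flop(4e11, 1.5e13)  # 3.6e25 FLOP
print(f"Estimated compute: {flop:.1e} FLOP")
print("Crosses EU systemic-risk threshold:", flop > EU_AI_ACT_SYSTEMIC_RISK)  # True
print("Crosses US reporting threshold:", flop > US_EO_REPORTING)              # False
```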
This RAND report analyzes the strategic dynamics between the US and China in the context of artificial general intelligence development, identifying key national security scenarios where competition, conflict, or cooperation may emerge. It examines five distinct national security problem areas to map out incentive structures that could drive bilateral behavior around AGI. The report highlights both the risks of an AGI arms race and potential pathways for cooperative risk management.
This RAND research brief examines the risk of regulatory capture in AI governance, where AI developers and industry actors may unduly influence the regulatory bodies meant to oversee them. It analyzes structural vulnerabilities in AI oversight mechanisms and offers policy recommendations to mitigate industry capture of AI safety regulations.
OpenAI dramatically expanded its lobbying expenditures in 2024, increasing spending nearly sevenfold compared to the prior year. This reflects the company's growing efforts to shape AI-related legislation and regulatory frameworks in Washington, D.C., as policymakers increasingly focus on governing advanced AI systems.
This academic article published in International Affairs proposes the creation of an international agency dedicated to AI governance and safety oversight, drawing on precedents from other international regulatory bodies. It examines the institutional design challenges and geopolitical considerations involved in establishing such a body to manage global AI risks.
Published by the Centre for the Governance of AI at FHI Oxford, this foundational report outlines a comprehensive research agenda for AI governance, identifying key challenges around safety, fairness, security, and the concentration of power. It maps the landscape of open questions spanning technical, institutional, and geopolitical dimensions, providing a structured framework for researchers and policymakers to prioritize governance work.
This article examines the challenges of establishing international AI governance frameworks in a rapidly evolving regulatory landscape. The authors argue that while AI's borderless nature calls for coordinated international legal responses, significant obstacles remain in developing applicable international law and in establishing regulatory authority for enforcement. In particular, regulatory inertia, driven by a lack of technical regulatory capacity despite urgent need, complicates efforts to create proactive governance before measurable harms occur. The authors conclude that substantial hurdles must be overcome before effective international AI governance frameworks can be realized.
This CSIS analysis examines the existing legal frameworks that US allies possess to implement export controls on AI technologies and semiconductors, assessing how allied nations can coordinate with US restrictions without requiring new legislation. It evaluates the current authorities in key partner countries and identifies gaps or opportunities for multilateral alignment on technology export policy.
The 2025 Stanford HAI AI Index Report provides a comprehensive annual survey of AI development across technical performance, economic investment, global competition, and responsible AI adoption. It synthesizes data from academia, industry, and government to track AI progress and societal impact. The report serves as a key reference for understanding where AI stands today and emerging trends shaping the field.
This Future of Humanity Institute (FHI) page outlines research and initiatives focused on international cooperation frameworks for AI governance. It addresses how nations and institutions can coordinate on AI safety standards, norms, and policy to mitigate global risks from advanced AI systems.
The paper explores how computing power can be used to enhance AI governance through visibility, resource allocation, and enforcement mechanisms. It examines the technical and policy opportunities of compute governance while also highlighting potential risks.
The Centre for the Governance of AI (GovAI) is a leading research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI. It produces rigorous research on AI governance, policy, and societal impacts, while fostering a global talent pipeline for responsible AI oversight. GovAI bridges technical AI safety concerns with practical policy recommendations.
This CFR analysis examines the technological gap between Huawei's domestic AI chips and Nvidia's leading GPUs, arguing that China's semiconductor capabilities remain significantly behind and that US export controls are effectively constraining China's AI development. The piece assesses Huawei's progress in chip design and manufacturing while highlighting persistent bottlenecks in yields, software ecosystems, and advanced packaging.