Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Centre for the Governance of AI

A seminal 2018 document from GovAI/FHI that helped establish AI governance as a formal research field; widely cited as a foundational framing of the governance problem space and useful orientation for researchers entering the field.

Metadata

Importance: 82/100 · organizational report · primary source

Summary

Allan Dafoe's foundational research agenda for the field of AI governance, outlining the problem space across three clusters: understanding the technical landscape, analyzing AI politics among key actors, and envisioning ideal governance structures. The document argues that scholarly attention to AI governance is critically insufficient given the potential magnitude of risks including strategic instability, reinforced totalitarianism, and unsafe AI races.

Key Points

  • Organizes AI governance research into three clusters: technical landscape, AI politics (firms/governments/publics), and ideal governance structures for advanced AI transition.
  • Identifies major risks: labor displacement, inequality, oligopolistic market structure, reinforced totalitarianism, shifts in national power, strategic instability, and safety-sacrificing AI races.
  • Argues consequences of advanced AI are of a magnitude and timescale to dwarf other global concerns, yet scholarly attention remains negligible.
  • Emphasizes that AI governance challenges are often time-sensitive and unlikely to be addressed by market forces alone, requiring proactive policy research.
  • Produced by the Centre for the Governance of AI at Oxford's Future of Humanity Institute; represents a foundational framing document for the GovAI research program.

Cited by 1 page

Page | Type | Quality
Governance-Focused Worldview | Concept | 67.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 98 KB
AI Governance:
A Research Agenda
Allan Dafoe
Centre for the Governance of AI
Future of Humanity Institute
University of Oxford
First draft July 2017
v1.0 August 27 2018
Visit fhi.ox.ac.uk/govaiagenda to check for the most recent version of this paper.


AI Governance: A Research Agenda
Abstract1
Artificial intelligence (AI) is a potent general purpose technology. Future progress could be
rapid, and experts expect that superhuman capabilities in strategic domains will be achieved
in the coming decades. The opportunities are tremendous, including advances in medicine
and health, transportation, energy, education, science, economic growth, and environmental
sustainability. The risks, however, are also substantial and plausibly pose extreme
governance challenges. These include labor displacement, inequality, an oligopolistic global
market structure, reinforced totalitarianism, shifts and volatility in national power, strategic
instability, and an AI race that sacrifices safety and other values. The consequences are
plausibly of a magnitude and on a timescale to dwarf other global concerns. Leaders of
governments and firms are asking for policy guidance, and yet scholarly attention to the AI
revolution remains negligible. Research is thus urgently needed on the AI governance
problem: the problem of devising global norms, policies, and institutions to best ensure the
beneficial development and use of advanced AI.
This report outlines an agenda for this research, dividing the field into three research clusters.
The first cluster, the technical landscape, seeks to understand the technical inputs,
possibilities, and constraints for AI. The second cluster, AI politics, focuses on the political
dynamics between firms, governments, publics, researchers, and other actors. The final
research cluster of AI ideal governance envisions what structures and dynamics we would
ideally create to govern the transition to advanced artificial intelligence.
1 This document received input from many contributors. The text was primarily written by Allan Dafoe. Several portions received substantial input from other individuals, most affiliated with the Governance of AI Program, as noted for each portion. This work draws from the body of thinking and insight in the community of scholars and scientists thinking about these
issues. In particular, for comments, conversations, and related work this document benefits from Miles Brundage, Owen
Cotton-Barratt, Richard Danzig, Daniel Dewey, Owain Evans, Paul de Font-Reaulx, Genevieve Fried, Ben Garfinkel, Katja Grace,
Rox Heston, Geoffrey Irving, Charlotte Jander, Jade Leung, Chris Olah, Catherine Olsson, Cullen O’Keefe, Andrew Snyder-Beattie,
Ken Schultz, Claudia Shi, Duncan Snidal, Nate Soares, Jaan Tallinn, Robert Trager, Helen Toner, Brian Tse, Eliezer Yudkowsky,
Baobao Zhang, and especially Dario Amodei, Nick Bostrom, Owain Evan

... (truncated, 98 KB total)