Longterm Wiki

OpenAI. Governance of superintelligence

web

Credibility Rating

4/5 — High

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OpenAI

A high-profile 2023 statement from OpenAI's founders calling for international governance of superintelligent AI; notable as an industry-insider perspective advocating for external oversight of their own technology.

Metadata

Importance: 72/100 · blog post · primary source

Summary

A policy statement by OpenAI's leadership (Sam Altman, Greg Brockman, Ilya Sutskever) arguing that superintelligence may arrive within a decade and requires new international governance frameworks beyond existing AI oversight approaches. It proposes coordination among leading AI labs, government involvement, and an international watchdog body analogous to the IAEA. The piece acknowledges the transformative and potentially dangerous nature of superintelligence while arguing development should continue under improved oversight.

Key Points

  • Superintelligence could arrive within a decade and will be qualitatively different from current AI, requiring new governance approaches.
  • Proposes an international authority (like the IAEA) to oversee and inspect the most powerful AI systems globally.
  • Argues that leading AI developers should coordinate on safety and governance even as they compete commercially.
  • Emphasizes that any governance framework must balance preventing catastrophic misuse while preserving benefits of advanced AI.
  • Acknowledges that normal regulatory processes may be too slow and that proactive institution-building is urgently needed.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Superintelligence | Concept | 92.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 8 KB


May 22, 2023

[Safety](https://openai.com/news/safety-alignment/)

# Governance of superintelligence

Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

![An abstract night sky digital painting featuring a swirling deep blue background with scattered yellow stars and a glowing moon in the upper left.](https://images.ctfassets.net/kftzwdyauwt9/4Yf8J4JzpwrPd721DYhT4J/f143a7266f7eb69a5565e6613ab08f78/Governance_of_superintelligence.png?w=3840&q=90&fm=webp)


Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

## A starting point

There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an [IAEA⁠(opens in a new window)](https://www.iaea.org/) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of securi

... (truncated, 8 KB total)
Resource ID: c2e3d7e5c92d5689 | Stable ID: MWYzOWE1Nj