OpenAI. Governance of superintelligence
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
A high-profile 2023 statement from OpenAI's founders calling for international governance of superintelligent AI; notable as an industry-insider perspective advocating for external oversight of their own technology.
Metadata
Summary
A policy statement by OpenAI's leadership (Sam Altman, Greg Brockman, Ilya Sutskever) arguing that superintelligence may arrive within a decade and requires new international governance frameworks beyond existing AI oversight approaches. It proposes coordination among leading AI labs, government involvement, and an international watchdog body analogous to the IAEA. The piece acknowledges the transformative and potentially dangerous nature of superintelligence while arguing development should continue under improved oversight.
Key Points
- Superintelligence could arrive within a decade and will be qualitatively different from current AI, requiring new governance approaches.
- Proposes an international authority (like the IAEA) to oversee and inspect the most powerful AI systems globally.
- Argues that leading AI developers should coordinate on safety and governance even as they compete commercially.
- Emphasizes that any governance framework must balance preventing catastrophic misuse while preserving benefits of advanced AI.
- Acknowledges that normal regulatory processes may be too slow and that proactive institution-building is urgently needed.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Superintelligence | Concept | 92.0 |
Cached Content Preview
May 22, 2023
[Safety](https://openai.com/news/safety-alignment/)
# Governance of superintelligence
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
## A starting point
There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
And of course, individual companies should be held to an extremely high standard of acting responsibly.
Second, we are likely to eventually need something like an [IAEA(opens in a new window)](https://www.iaea.org/) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of securi
... (truncated, 8 KB total)