OpenAI's advocacy for licensing
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is an official OpenAI position piece by Sam Altman and Greg Brockman outlining their views on superintelligence governance; useful for understanding how leading AI labs frame the need for licensing and international oversight.
Metadata
Importance: 62/100 · blog post · primary source
Summary
OpenAI's blog post argues that superintelligence may arrive sooner than expected and calls for new governance frameworks, including international coordination and licensing regimes for the most powerful AI systems. It outlines OpenAI's views on how society should prepare for and oversee AI systems that could surpass human-level capabilities across most domains.
Key Points
- Superintelligence could arrive within the current decade, requiring proactive governance structures before systems become ungovernable.
- OpenAI advocates for licensing requirements for frontier AI developers to ensure accountability and safety standards.
- International coordination is deemed essential to prevent races to the bottom on safety among competing nations or companies.
- Existing regulatory institutions may be insufficient; new bodies analogous to the IAEA or nuclear regulators may be needed for AI.
- The post acknowledges tension between open development and the need for safety-focused control over the most powerful systems.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Governance and Policy | Crux | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 7 KB
OpenAI
May 22, 2023
[Safety](https://openai.com/news/safety-alignment/)
# Governance of superintelligence
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
## A starting point
There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
And of course, individual companies should be held to an extremely high standard of acting responsibly.
Second, we are likely to eventually need something like an [IAEA(opens in a new window)](https://www.iaea.org/) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implemen
... (truncated, 7 KB total)
Resource ID: 825843053766d808 | Stable ID: MzFhM2NlYm