Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: RAND Corporation
Published by RAND, this report is useful for policy researchers and governance practitioners seeking historical context for AI risk debates, though its content could not be fully verified because the full page text was unavailable.
Metadata
Importance: 55/100 · organizational report · analysis
Summary
A RAND Corporation research report examining historical analogies—such as nuclear weapons, biotechnology, and other transformative technologies—to inform governance frameworks and policy responses for advanced AI. The report draws lessons from past technological transitions to help policymakers anticipate and manage risks associated with increasingly powerful AI systems.
Key Points
- Analyzes historical cases of transformative and dual-use technologies to extract governance lessons applicable to advanced AI
- Examines how past technology governance efforts succeeded or failed, offering models for AI oversight institutions
- Considers analogies such as nuclear weapons, recombinant DNA, and the internet to frame AI risk management
- Provides policy-relevant frameworks for governments and institutions seeking to regulate or guide AI development
- Highlights limitations of historical analogies, noting ways AI may present novel challenges not captured by prior cases
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Policy Effectiveness | Analysis | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 6 KB
The author assesses similarities and differences between four historical examples of technology governance—nuclear technology, the Internet, encryption products, and genetic engineering—and artificial intelligence (AI) to identify lessons for AI governance. The author then discusses the importance of norms in technology governance, challenges in governing physical versus nonphysical assets, and partnerships between the public and private sectors.
# Historical Analogues That Can Inform AI Governance
[Michael J. D. Vermeer](https://www.rand.org/about/people/v/vermeer_michael_j_d.html)
Research · Published Aug 19, 2024
The increasing capabilities of artificial intelligence (AI) have raised questions about how to maximize its benefits while minimizing the risks it may pose. Similar governance questions have arisen for four technologies in recent decades: nuclear technology, the Internet, encryption products, and genetic engineering. The histories of their governance illustrate three themes: the need for consensus on norms for the technology, important differences in the governance of physical and nonphysical assets, and the role of partnerships between the public and private sectors in governance. These technologies vary in:

- the resources required to compete in their development
- whether their uses lie primarily in the public or private sector
- the security risks they pose, and the degree of consensus on those risks
- the physical assets that can be controlled
- whether they can foster a community of shared interests for governance
- whether public or private governance is better suited to minimizing risks and maximizing benefits
## Key Findings
- AI that poses serious risks of broad harm, requires substantial resources to acquire and use, and has physical assets that can be monitored and
... (truncated, 6 KB total)
Resource ID: 88a8241bd9872820 | Stable ID: ZWYyMGQ1Nj