Longterm Wiki

Center for AI Safety: Catastrophic Risks

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

Published by the Center for AI Safety (CAIS), a prominent nonprofit in the AI safety field. This page provides an accessible taxonomy of catastrophic AI risks suitable for newcomers and policymakers, summarizing concerns shared by many researchers in the field.

Metadata

Importance: 62/100 · organizational report · educational

Summary

The Center for AI Safety's catastrophic risks page outlines the major categories of risk from advanced AI systems, including misaligned AI, misuse by malicious actors, and structural risks to society. It serves as an accessible entry point for understanding why AI safety researchers consider certain AI development trajectories potentially civilization-threatening. The page synthesizes key concerns from the AI safety research community into a clear public-facing framework.

Key Points

  • Identifies multiple distinct catastrophic risk categories including AI misalignment, deliberate misuse, and systemic societal risks
  • Explains how advanced AI could threaten human oversight and control if not developed carefully
  • Covers risks from both accidental failures (misaligned goals) and intentional harms (bioweapons, cyberattacks)
  • Highlights structural risks such as AI-enabled power concentration and erosion of democratic institutions
  • Serves as a public-facing resource from a prominent AI safety organization to build awareness of existential concerns

Cited by 1 page

Page | Type | Quality
AI Risk Interaction Matrix | Analysis | 65.0

Cached Content Preview

HTTP 200 | Fetched Feb 23, 2026 | 29 KB
AI Risks that Could Lead to Catastrophe | CAIS 
 
 Risks from AI

 An Overview of Catastrophic AI Risks

 Artificial intelligence (AI) has recently seen rapid advancements, raising concerns among experts, policymakers, and world leaders about its potential risks. As with all powerful technologies, advanced AI must be handled with great responsibility to manage the risks and harness its potential. 

 Narrated Rendition: The narration covers the full paper, offering more depth than the overview.

 Sections

 Introduction · Malicious Use · AI Race · Organizational Risks · Rogue AIs · Conclusion · FAQ

 Catastrophic AI risks can be grouped under four key categories, summarized below.

 For our most comprehensive overview of AI risk, consider reading the full paper this summary is based on.
 
 Malicious use: People could intentionally harness powerful AIs to cause widespread harm. AI could be used to engineer new pandemics or for propaganda, censorship, and surveillance, or released to autonomously pursue harmful goals. To reduce these risks, we suggest improving biosecurity, restricting access to dangerous AI models, and holding AI developers liable for harms.
 AI race: Competition could push nations and corporations to rush AI development, relinquishing control to these systems. Conflicts could spiral out of control with autonomous weapons and AI-enabled cyberwarfare. Corporations will face incentives to automate human labor, potentially leading to mass unemployment and dependence on AI systems. As AI systems proliferate, evolutionary dynamics suggest they will become harder to control. We recommend safety regulations, international coordination, and public control of general-purpose AIs.
 Organizational risks: Organizations developing advanced AI could cause catastrophic accidents, particularly if they prioritize profits over safety. AIs could be accidentally leaked to the public or stolen by malicious actors, and organizations could fail to properly invest in safety research. We suggest fostering a safety-oriented organizational culture and implementing rigorous audits, multi-layered risk defenses, and state-of-the-art information security.
 Rogue AIs: We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe. We also recommend advancing AI safety research in areas such as adversarial robustness, model honesty, transparency, and removing undesired capabilities.

 Introduction

 Today’s technological era would astonish past gener

... (truncated, 29 KB total)
Resource ID: 100d9eb9a2e8ffa8 | Stable ID: YTBkYmExMT