Longterm Wiki

ControlAI Risks Page

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: ControlAI

ControlAI is an AI safety advocacy organization; this page summarizes their public-facing risk narrative and is useful for understanding how safety concerns are communicated to general audiences and policymakers.

Metadata

Importance: 35/100 · tool page · educational

Summary

ControlAI's risks page outlines the organization's perspective on the major risks posed by advanced AI systems, likely covering existential, societal, and near-term harms. It serves as a public-facing resource explaining why the organization believes AI risk mitigation and governance are urgent priorities.

Key Points

  • Presents ControlAI's framing of AI risks as serious and requiring urgent policy and governance responses
  • Likely covers both near-term harms and longer-term existential or catastrophic risk scenarios
  • Intended as an accessible, public-facing explainer to build awareness and support for AI safety measures
  • Reflects a governance-focused nonprofit perspective on the AI risk landscape
  • May link to or summarize technical and policy arguments for AI risk reduction

Cited by 1 page

Page      | Type         | Quality
ControlAI | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB
# The Race to AGI: Humanity's Greatest Threat

# Extinction Risks from AGI

# The Current State of AI Development

AI progress has exploded in the last 10 years, reaching near-human-level capabilities in writing, coding, art, math, and many other fields of human activity. This progress has been driven by deep learning: modern AIs are grown by feeding them massive amounts of data and letting them evolve in response, not built piece by piece by humans.
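To make the "grown, not built" point concrete, the following is a minimal illustrative sketch, not anything from ControlAI's page: a toy model whose behavior emerges from gradient descent on data rather than from hand-written rules. The dataset, model, and hyperparameters are placeholders chosen for simplicity.

```python
# Illustrative sketch only: a toy model "grown" from data by gradient descent,
# rather than programmed rule by rule. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: label is 1 when x0 + x1 > 1, else 0.
X = rng.random((1000, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = rng.normal(size=2)  # parameters start random...
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    p = sigmoid(X @ w + b)             # model's current predictions
    grad_w = X.T @ (p - y) / len(y)    # gradient of the cross-entropy loss
    grad_b = float(np.mean(p - y))
    w -= 0.5 * grad_w                  # ...and "evolve in response" to the data
    b -= 0.5 * grad_b

print("learned weights:", w, "bias:", b)  # behavior learned from data, not designed
```

The same loop, scaled up by many orders of magnitude in parameters and data, is the basic recipe behind the systems the article describes; no one specifies the learned behavior directly.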

Researchers and engineers don't need to understand AIs to create them; indeed, experts consistently fail to anticipate how quickly new skills will be unlocked, or to explain how existing AIs work. Progress is bottlenecked only by resources (such as AI chips, electrical power, and data), not by scientific insight. As tech companies and frontier AI companies collaborate to unlock ever more resources, the path leads to increasingly intelligent yet opaque AIs.

# The Path to Superintelligence

How far can this trend of smarter AI go? Looking at how humanity has historically increased its intelligence, we can identify three strategies: tools, groups, and methods. AIs can leverage these same strategies; indeed, each is already being used in current AI R&D.

Although skeptics claim that various components of intelligence cannot be automated, no existing scientific theory of intelligence backs these claims. We should therefore assume the trend towards smarter AIs will continue, eventually leading to Artificial General Intelligence (AGI): AIs able to perform the same intellectual tasks as humans.

# The Emergence of AGI and Beyond

Since AGI would be able to do anything a human can, it would be able to do AI research and improve its own intelligence. AGI companies and AI researchers are already pushing hard in this direction. Since software is cheaper and more efficient than brains, AGI would improve far faster than any human, eventually reaching artificial superintelligence (ASI) that surpasses humanity's collective intelligence.

As it continues to scale, ASI would unlock abilities to shape matter and energy that would look godlike compared to human engineering. Even without malicious intent, these godlike AIs would by default wipe out humanity as collateral damage while pursuing their own goals, in the same way ants are just collateral damage for contractors building a house.

# The Challenge of AI Alignment

Godlike AIs lead to catastrophe because of the incredible difficulty of aligning AI goals with those of humanity. Alignment is a harder version of problems humanity already struggles with, such as making companies and governments serve what citizens care about.

Solving alignment requires massive progress on questions like finding what we value, reconciling contradictions between values, and predicting consequences to avoid side effects. This would require decades of research and trillions in investment; yet only a handful of people and a couple hundred million dollars are currently invested, with most effort going towards making AIs

... (truncated, 6 KB total)
Resource ID: b8f1fc69cce49e85 | Stable ID: Y2VkZjc5MW