
Should We Pause AI Development?


Comprehensive synthesis of the AI pause debate showing moderate expert support (35-40% of 2,778 researchers) and high public support (72%) but very low implementation feasibility, with all major labs continuing development despite 33,000+ FLI letter signatures. Alternative approaches like RSPs have seen actual adoption while pause proposals remain politically rejected (US Senate vote 99-1 against moratorium).

Question: Should we pause/slow development of advanced AI systems?
Catalyst: 2023 FLI open letter signed by 30,000+ people
Stakes: Trade-off between safety preparation and beneficial AI progress

The AI Pause Debate


In March 2023, the Future of Life Institute published an open letter calling for a 6-month pause on training AI systems more powerful than GPT-4. The letter garnered over 33,000 signatures, including Turing Award winner Yoshua Bengio and prominent figures like Elon Musk and Steve Wozniak. It ignited fierce debate: Is pausing AI development necessary for safety, or counterproductive and infeasible?

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Expert Support | Moderate (35-40%) | 2023 AI Impacts survey: ≈35% of 2,778 AI researchers favor slower development |
| Public Support | High (65-70%) | AIPI poll: 72% of Americans prefer slowing AI development |
| Feasibility | Very Low | No pause implemented despite 33,000+ signatories; major labs continued development |
| International Coordination | Very Low | No binding agreements; China has expressed interest but made no commitments |
| Alternative Adoption | Medium | RSPs adopted by Anthropic, OpenAI, Google DeepMind; EU AI Act proceeding |
| Historical Precedent | Mixed | Asilomar 1975 succeeded; nuclear/climate coordination partial |
| Current Status (2025) | Pause rejected; regulation fragmented | US Senate rejected 10-year moratorium 99-1; 1,000+ state AI bills in 2025 |


The Debate Landscape


The Proposal

Pause advocates call for:

  • Moratorium on training runs beyond current frontier (GPT-4 level)
  • Time to develop safety standards and evaluation frameworks
  • International coordination on AI governance
  • Only resume when safety can be ensured

Duration proposals vary: the FLI letter called for six months, while advocates such as Eliezer Yudkowsky call for an indefinite halt until alignment is solved.

The Spectrum of Positions

Positions on pausing AI range from "accelerate" to "indefinite pause":

| Position holder | Stance | View | Evidence | Representative quote | Confidence |
|---|---|---|---|---|---|
| Effective Accelerationists (e/acc) | Strongly oppose | AI progress is a moral imperative; pausing delays benefits and cedes advantage to others. | Techno-optimist philosophy | "The only way forward is faster" | High |
| Most AI labs (OpenAI, Google, Anthropic) | Oppose | A pause is infeasible and counterproductive; prefer responsible scaling with safety evaluations. | Continued development; public statements | "We need to move forward responsibly, not pause" | High |
| Yann LeCun (Meta) | Oppose | Does not believe existential risk is real; thinks a pause would harm innovation. | Public opposition to pause | "Pausing AI research would be a mistake" | High |
| Yoshua Bengio | Cautious support | Signed the FLI letter; concerned about risks but practical about feasibility. | FLI letter signature; public statements | "We need to slow down and think carefully" | Medium |
| Stuart Russell | Support slowdown | Argues we are not ready for superintelligence; advocates slowing down to solve safety. | Academic writing; public advocacy | "We're rushing toward something we don't understand" | High |
| Eliezer Yudkowsky | Strong support | Believes AGI will be catastrophic if unaligned; advocates an indefinite pause until alignment is solved. | Public writing; called for international treaty | "Shut it all down" | High |
| Max Tegmark (FLI) | Support | Organized the pause letter; believes we need time for safety and governance. | FLI pause letter | "Let's not race towards the cliff" | High |

Key Cruxes

Each crux below lists the two competing answers, the implication each side draws, and how confidently the view is held.

  • Is a multilateral pause achievable?
    - No, coordination is impossible: China won't agree, verification is impractical, there are too many actors, and enforcement is impossible. Implication: a pause is fantasy; focus on alternatives. (Confidence: high)
    - Yes, with sufficient effort: nuclear weapons achieved some coordination and climate agreements exist, so it is worth trying. Implication: pursue international coordination. (Confidence: low)
  • Will we get warning signs before catastrophe?
    - Yes, problems will emerge gradually: weaker systems will show concerning behaviors first, so we can learn and adjust. Implication: no pause is needed; we can iterate safely. (Confidence: medium)
    - No, expect fast takeoff or deception: systems may jump from safe to dangerous quickly, and an AI might hide misalignment. Implication: a pause is needed to prepare before it is too late. (Confidence: medium)
  • How much safety progress can happen during a pause?
    - Substantial, time helps: a pause allows developing evaluation frameworks, safety techniques, and governance. Implication: a pause is worth it. (Confidence: medium)
    - Minimal, capable systems are needed: safety research requires frontier systems to study, and alignment cannot be solved in a vacuum. Implication: a pause does not help safety. (Confidence: medium)
  • How significant is the China concern?
    - Critical, we cannot give China an advantage: AI determines the future power balance, and a US pause means China leads. Implication: cannot pause. (Confidence: medium)
    - Overstated, alignment matters more: a misaligned US AGI is no better than a Chinese AGI, and China may coordinate. Implication: a pause can be considered. (Confidence: low)

Alternative Proposals

Many propose middle grounds between full pause and unconstrained racing:

Comparison of Alternatives

| Approach | Mechanism | Adoption Status | Effectiveness | Verification Difficulty |
|---|---|---|---|---|
| Responsible Scaling Policies | If-then commitments: if dangerous capabilities are detected, pause or add safeguards | Anthropic (ASL system), OpenAI (Preparedness Framework), Google DeepMind (Frontier Safety Framework) | Medium; depends on evaluation quality | Medium; relies on internal assessments |
| Compute Governance | Limit training compute through export controls or compute thresholds | US export controls (Oct 2022, expanded 2023-2024); EU AI Act thresholds | Medium; slows frontier development | Low; chip sales are trackable |
| Safety Tax | Require 10-20% of compute/budget on safety research | Proposed but not mandated | Low-Medium; difficult to verify meaningful safety work | High; "safety" is vaguely defined |
| Staged Deployment | Develop models but delay release for safety testing | Common practice at major labs | Medium; delays harm but allows capability development | Low; deployment is observable |
| International Registry | Register large training runs with an international body | Seoul AI Summit commitments (2024) | Low; visibility without enforcement | Medium; relies on self-reporting |
| Threshold-Based Pause | Pause only when specific dangerous capabilities emerge | Proposed in RSPs; no regulatory mandate | Potentially high if thresholds are well-defined | High; requires robust capability evaluation |

Detailed Alternatives

Responsible Scaling Policies (RSPs)

  • Continue development but with if-then commitments
  • If dangerous capabilities detected, implement safeguards or pause
  • Anthropic's approach uses AI Safety Levels (ASL-1 through ASL-4+)
  • As of May 2025, Anthropic activated ASL-3 for Claude Opus 4 due to CBRN concerns
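To make the "if-then" structure concrete, here is a minimal Python sketch of how such a commitment could be encoded. The capability names, threshold numbers, and level transitions are illustrative assumptions, not any lab's actual criteria.

```python
# Minimal sketch of an RSP-style "if-then" gate.
# Capability names and thresholds are illustrative, not any lab's real criteria.
from dataclasses import dataclass

@dataclass
class EvalResult:
    capability: str   # e.g. "cbrn_uplift" or "autonomous_replication"
    score: float      # hypothetical 0-1 risk score from a capability evaluation

# Illustrative trigger thresholds: crossing one escalates the safety level.
TRIGGER_THRESHOLDS = {
    "cbrn_uplift": 0.5,
    "autonomous_replication": 0.5,
    "cyber_offense": 0.6,
}

def required_action(results: list[EvalResult], current_level: int) -> str:
    """Return the action an if-then commitment would require."""
    triggered = [r.capability for r in results
                 if r.score >= TRIGGER_THRESHOLDS.get(r.capability, 1.0)]
    if not triggered:
        return f"continue scaling at safety level {current_level}"
    # A dangerous capability was detected: apply stronger safeguards
    # or pause further scaling until they exist.
    return (f"{triggered} exceed level-{current_level} thresholds: "
            f"apply level-{current_level + 1} safeguards or pause scaling")

print(required_action([EvalResult("cbrn_uplift", 0.7)], current_level=2))
```

The point of the sketch is that the pause decision is conditional and evaluation-driven rather than calendar-driven, which is exactly where critics focus: the gate is only as strong as the evaluations behind it.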

Compute Governance

  • Limit training compute through regulation or voluntary agreement
  • US export controls restrict sales of advanced AI chips to China and impose caps on ~150 other countries
  • The EU AI Act classifies general-purpose models trained with more than 10^25 FLOP as posing "systemic risk"
  • Easier to verify than complete pause—chip production is concentrated in few fabs
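As a rough illustration of how a compute threshold can be checked from observable inputs, the sketch below estimates training compute as chips × peak FLOP/s × seconds × utilization and compares it to the 10^25 FLOP line. The chip throughput, run length, and utilization figures are assumptions chosen only for illustration.

```python
# Back-of-envelope check against a compute threshold such as the
# EU AI Act's 10^25 FLOP line for general-purpose models with systemic risk.
# Per-chip throughput and utilization below are rough assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def training_compute_flop(num_chips: int,
                          peak_flops_per_chip: float,
                          training_days: float,
                          utilization: float = 0.4) -> float:
    """Estimate total training compute: chips * peak FLOP/s * seconds * utilization."""
    seconds = training_days * 24 * 3600
    return num_chips * peak_flops_per_chip * seconds * utilization

# Hypothetical run: 10,000 accelerators at ~1e15 FLOP/s peak for 90 days.
estimate = training_compute_flop(10_000, 1e15, 90)
print(f"Estimated training compute: {estimate:.2e} FLOP")
print("Crosses threshold:", estimate >= SYSTEMIC_RISK_THRESHOLD_FLOP)
```

Because the inputs (chip shipments, cluster sizes, run duration) are at least partially observable, this kind of estimate is easier to audit than a blanket pause.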

Safety Tax

  • Require safety work proportional to capabilities
  • E.g., spend 20% of compute on safety research
  • Maintains progress while prioritizing safety
  • No mandatory implementation; relies on voluntary commitment

Staged Deployment

  • Develop models but delay deployment for safety testing
  • Allows research while preventing premature release

International Registry

  • Register large training runs with international body
  • Creates visibility without stopping work
  • Foundation for future coordination
  • Seoul AI Summit (2024) established voluntary commitments for 16 AI companies

Threshold-Based Pause

  • Continue until specific capability thresholds (e.g., autonomous replication)
  • Then pause until safeguards developed
  • Clear criteria, only activates when needed

The Coordination Problem

Why is coordination so hard? Analysis of AI governance challenges suggests coordination failure is the default outcome absent strong institutional mechanisms.

Key Actors and Their Stakes

| Actor Category | Examples | Estimated AI Investment (2024) | Pause Incentive |
|---|---|---|---|
| US Frontier Labs | OpenAI, Anthropic, Google DeepMind, Meta | $50-100B+ combined | Very Low; first-mover advantage |
| Chinese Labs | Baidu, ByteDance, Alibaba, Tencent | $15-30B estimated | Very Low; strategic competition |
| European Labs | Mistral, Aleph Alpha | $2-5B | Low-Medium; regulatory pressure |
| Open Source | Meta (Llama), HuggingFace, community | Distributed | None; decentralized development |
| Governments | US, China, EU, UK | Regulatory role | Mixed; security vs. innovation |

Verification challenges:

  • Training runs are secret—only ~10-20 organizations can train frontier models
  • Compute usage is hard to monitor without chip-level tracking
  • Open source development involves 100,000+ contributors globally
  • PauseAI protests in 13 countries (May 2024) had minimal policy impact

Incentive misalignment:

  • First to AGI gains enormous advantage—estimated $1-10T+ value capture
  • Defecting from pause very tempting—6-12 month lead could be decisive
  • Short-term vs long-term tradeoffs favor short-term action
  • National security concerns: US-China AI competition frames pause as "unilateral disarmament"
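The incentive structure described above can be sketched as a simple two-player game. The payoff numbers below are purely illustrative assumptions, chosen only to show how a first-mover advantage makes racing the dominant strategy even when mutual pausing would leave both labs better off.

```python
# Toy payoff matrix for two frontier labs deciding whether to honor a pause.
# Payoff numbers are purely illustrative of the incentives described above.
PAYOFFS = {
    # (lab_a_action, lab_b_action): (payoff_a, payoff_b)
    ("pause", "pause"): (3, 3),   # shared safety benefit, slower progress
    ("pause", "race"):  (0, 5),   # the racing lab captures first-mover value
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),   # race dynamics, higher shared risk
}

def lab_a_best_response(lab_b_action: str) -> str:
    """Lab A's payoff-maximizing action given Lab B's choice."""
    return max(("pause", "race"),
               key=lambda action: PAYOFFS[(action, lab_b_action)][0])

# Racing is Lab A's best response to either choice by Lab B, so mutual racing
# is the equilibrium even though mutual pausing pays both labs more.
print(lab_a_best_response("pause"), lab_a_best_response("race"))
```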

Precedents suggest pessimism:

| Precedent | Outcome | Lessons for AI |
|---|---|---|
| Asilomar 1975 | Voluntary pause worked (≈1 year) | Smaller field (≈140 scientists); clearer risks; easier verification |
| Nuclear Non-Proliferation | Partial success (9 nuclear states) | Slower timelines (decades); clear existential threat; fewer actors |
| Climate (Paris Agreement) | Minimal binding success | Diffuse actors; long timelines; enforcement failed |
| Biological Weapons Convention | Near-universal (187 states) but weak | No verification mechanism; concerns about compliance persist |

But some hope:

  • All parties may share existential risk concern—70% of AI researchers want more safety prioritization
  • Industry may support regulation to avoid liability and level playing field
  • Compute is traceable: TSMC and Samsung produce 90%+ of advanced chips, and ASML is the sole supplier of EUV lithography machines
  • China has expressed interest in international coordination: "only with joint efforts of the international community can we ensure AI technology's safe and reliable development"

What Would Need to Be True for a Pause to Work?

For a pause to be both feasible and beneficial:

| Condition | Current Status | Feasibility Assessment |
|---|---|---|
| Multilateral buy-in | No formal US-China-EU agreement | Very Low; geopolitical competition, no active negotiations |
| Verification | Chip tracking possible but not implemented | Medium; TSMC/ASML choke points exist, software tracking is hard |
| Enforcement | No international AI enforcement body | Very Low; would require new institutions |
| Clear timeline | FLI proposed 6 months; Yudkowsky proposes indefinite | Low; no consensus on when "safety is solved" |
| Safety progress | 70% of researchers want more safety prioritization | Medium; unclear if a pause enables progress |
| Allowances | Not specified in most proposals | Medium; the "narrow AI" vs "frontier" line is fuzzy |
| Political will | 72% of the US public supports slowing AI | Medium; public support but industry opposition |

Current reality: Few of these conditions are met. As FLI noted on the letter's one-year anniversary, AI companies have instead directed "vast investments in infrastructure to train ever-more giant AI systems."

2024-2025 Developments

The pause debate has evolved significantly since the 2023 letter:

Global AI Governance Initiatives

| Date | Development | Impact on Pause Debate |
|---|---|---|
| Nov 2023 | Bletchley Declaration signed by 28 countries | Acknowledged risks but no pause provisions |
| May 2024 | Seoul AI Summit: 16 companies sign voluntary commitments | RSPs preferred over pause; thresholds remain vague |
| Feb 2025 | International AI Safety Report led by Yoshua Bengio | 100 experts; calls for governance but not a pause |
| Jul 2025 | US Senate rejects 10-year AI moratorium 99-1 | Federal pause rejected; 1,000+ state bills instead |
| Aug 2025 | EU AI Act general-purpose AI obligations take effect | Regulation over pause; no "grace period" |

PauseAI Movement

PauseAI, founded in May 2023 by Dutch software entrepreneur Joep Meindertsma, has organized protests across 13+ countries. Their goals include:

  • Temporary pause on training the most powerful general AI systems
  • International AI safety agency similar to IAEA
  • Democratic control over AI development

Despite ongoing activism, no country has implemented binding pause legislation.

Historical Parallels

Comparison of Technology Governance Precedents

| Case | Duration | Success | Key Success Factors | Applicability to AI |
|---|---|---|---|---|
| Asilomar 1975 | ≈1 year moratorium | High | Small field (≈140 scientists); scientists initiated; clear biological hazards | Low; AI has millions of practitioners and unclear hazards |
| Nuclear Test Ban | Ongoing since 1963 | Medium | Seismic verification; mutual existential threat; few actors (5-9 nuclear states) | Low; more AI actors, no mutual destruction threat |
| Montreal Protocol | 1987-present | Very High | Clear ozone hole evidence; available CFC substitutes; verifiable production | Low; no AI substitute, benefits are diffuse |
| Germline Editing | 2015-present | Medium | Low economic stakes; clear ethical violation (He Jiankui prosecuted) | Low; AI has massive economic stakes |
| Biological Weapons Convention | 1972-present | Low | 187 states parties but no verification mechanism | Medium; similar verification challenges |

Asilomar Conference on Recombinant DNA (1975):

  • Scientists voluntarily paused research on genetic engineering for approximately one year
  • ~140 biologists, lawyers, and physicians developed safety guidelines at Pacific Grove, California
  • Moratorium was "universally observed" in academic and industrial research centers
  • Led to NIH Recombinant DNA Advisory Committee and safety protocols still in use today
  • Key difference: Scientists controlled the technology; AI development involves thousands of companies and millions of developers

Nuclear Test Ban Treaties:

  • Partial Test Ban Treaty (1963): banned atmospheric testing—verified by detection networks
  • Comprehensive Test Ban Treaty (1996): signed by 187 states but not ratified by US, China, or others
  • Verification via seismology is feasible; 9 states now possess nuclear weapons
  • Key difference: Decades-long timeline allowed governance to develop; AI timelines may be 5-15 years

Ozone Layer (Montreal Protocol):

  • Successfully phased out CFCs globally—ozone hole now recovering
  • Required finding chemical substitutes (HFCs) and industry buy-in
  • Key difference: Clear, measurable environmental indicator; AI risks are speculative and contested

Moratorium on Human Germline Editing:

  • Mostly holding after He Jiankui's 2018 violation (3-year prison sentence in China)
  • Low economic stakes compared to AI; clear ethical consensus across cultures
  • Key difference: AI development has estimated $1-10T+ in value at stake

The Case for "Slowdown" Rather Than "Pause"

Many find a middle ground more palatable. Yoshua Bengio, Turing Award winner and lead author of the International AI Safety Report, has advocated for "red lines" that AI systems should never cross, rather than a blanket pause:

  • Autonomous replication or improvement
  • Dominant self-preservation and power seeking
  • Assisting in weapon development
  • Cyberattacks and deception

Slowdown means:

  • Deliberate pacing rather than maximizing speed
  • Investment in safety alongside capabilities
  • Coordination with other labs
  • Voluntary agreements where possible

More achievable because:

  • Doesn't require stopping completely
  • Maintains progress on benefits
  • Reduces but doesn't eliminate competition
  • Easier political sell

Examples of slowdown mechanisms include the alternatives discussed above: responsible scaling policies, compute governance, staged deployment, and international registries.


Expert Perspectives

Summary of Key Positions

| Expert | Affiliation | Position | Key Quote / Stance Basis |
|---|---|---|---|
| Eliezer Yudkowsky | MIRI | Indefinite shutdown | "Shut it all down" (TIME, 2023) |
| Yoshua Bengio | Mila, Turing laureate | International governance + red lines | "We succeeded in regulating nuclear weapons... we can reach a similar agreement for AI" |
| Max Tegmark | MIT, FLI | 6-month pause | Organized FLI letter; continues advocacy |
| Dario Amodei | Anthropic CEO | RSPs, not pause | Supports conditional pauses if capabilities exceed safeguards |
| Sam Altman | OpenAI CEO | Opposed to pause | Advocates international governance but continued development |
| Yann LeCun | Meta AI | Strongly opposed | Public opposition to pause as "counterproductive" |

The Disagreement Structure

Most disagreement reduces to different assessments of:

| Question | Pause Supporters | Pause Opponents |
|---|---|---|
| Current risk level | ASL-3/high-risk thresholds are being crossed | Risks are speculative; benefits are concrete |
| Coordination feasibility | The Asilomar precedent shows it's possible | China won't agree; enforcement is impossible |
| Safety progress during pause | Time enables governance development | Safety research requires frontier systems |
| Competitive dynamics | Misaligned AI is worse than losing the race | Ceding advantage to China is unacceptable |
| Alternative effectiveness | RSPs are "safety-washing" and insufficient | RSPs provide proportional protection |


Related Pages

  • Concepts: Google DeepMind, International Coordination
  • Key Debates: Open vs Closed Source AI, Government Regulation vs Industry Self-Governance
  • Safety Research: Scalable Oversight
  • Policy: Pause / Moratorium, Voluntary AI Safety Commitments
  • Organizations: Alignment Research Center, US AI Safety Institute
  • Approaches: AI Safety Cases
  • Risks: Multipolar Trap (AI Development), AI Authoritarian Tools
  • Analysis: OpenAI Foundation Governance Paradox, Long-Term Benefit Trust (Anthropic)
  • Models: International AI Coordination Game Model, AI Regulatory Capacity Threshold Model, Transition Model, Lab Behavior
  • Labs: GovAI
  • People: Yoshua Bengio, Stuart Russell