Updated 2026-01-28

Novel / Unknown Approaches
Analyzes probability (1-15%) of novel AI paradigms emerging before transformative AI, systematically reviewing historical prediction failures (expert AGI timelines shifted 43 years in 4 years, 13 years in one survey cycle) and comparing alternative approaches like neuro-symbolic (8-15% probability), SSMs (5-12%), and NAS (15-30%). Concludes current paradigm faces quantified limits (data exhaustion ~2028, compute costs approaching economic constraints) but near-term timelines favor incumbent approaches.


Overview

This category represents the probability mass we should assign to approaches not yet discovered or not included in our current taxonomy. History shows that transformative technologies often come from unexpected directions, and intellectual humility requires acknowledging this. The field of AI has undergone cyclical periods of growth and decline, known as AI summers and winters, with each cycle bringing unexpected architectural innovations. We are currently in the third AI summer, characterized by the transformer paradigm, but historical patterns suggest eventual disruption.

The challenge of forecasting AI development is well-documented. According to 80,000 Hours' analysis of expert forecasts, mean estimates on Metaculus for when AGI will be developed plummeted from 50 years to 5 years between 2020 and 2024. The AI Impacts 2023 survey found machine learning researchers expected AGI by 2047, compared to 2060 in the 2022 survey. This 13-year shift in a single year demonstrates the difficulty of prediction in this domain.

Beyond the "known unknowns" such as scaling limits and alignment challenges, we face a vast terrain of "unknown unknowns": emergent capabilities, unforeseen risks, and transformative shifts that defy prediction. The technology itself is evolving so rapidly that even experts struggle to predict its capabilities 6 months ahead.

Estimated probability of being dominant at transformative AI: 1-15% (range reflects timeline uncertainty; shorter timelines favor current paradigms, longer timelines favor novel approaches)

Why Include This Category

```mermaid
flowchart TB
  subgraph known["Known Approaches"]
      transformers["Transformers"]
      moe["Sparse/MoE"]
      ssm["SSMs"]
      neuro["Neuromorphic"]
      other["Other Known"]
  end

  subgraph unknown["Unknown Territory"]
      notyet["Not Yet Discovered"]
      overlooked["Overlooked Ideas"]
      combinations["Novel Combinations"]
      physics["New Physics?"]
  end

  known -->|"Sum to ≈85-99%"| total["Total Probability"]
  unknown -->|"Residual 1-15%"| total
```

Arguments for Allocating Probability Here

| Argument | Explanation | Historical Evidence |
|---|---|---|
| Historical track record | Major breakthroughs often unexpected | Transformer attention mechanism existed since 2014; breakout came in 2017 |
| Epistemic humility | We don't know what we don't know | Expert AI timeline estimates shifted 13 years in one survey cycle |
| Active research | Many smart people working on new ideas | 63% of neuro-symbolic papers focus on learning/inference innovation |
| Combinatorial space | Possible architectures vastly exceed those explored | NAS tools discovering architectures matching human-designed ones |
| Scaling approaching limits | Current paradigm may hit a ceiling | Epoch AI predicts high-quality text data exhausted by 2028 |

Arguments Against High Probability

| Argument | Explanation | Supporting Evidence |
|---|---|---|
| Current approaches working | Transformers haven't hit a hard ceiling | Training compute grew 5x/year 2020-2024 |
| Incremental progress | Breakthroughs usually build on existing work | Gen AI built on cloud, which built on the internet |
| Selection effects | Best ideas tend to be discovered early | Attention, backprop, deep networks all pre-2000 concepts |
| Time constraints | Limited years until TAI (if near) | Median expert estimate: AGI by 2047 |
| Investment momentum | Massive resources dedicated to current paradigm | $109B US AI investment in 2024 |
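The "5x/year" compute-growth figure can be sanity-checked against Epoch AI's 2e29 FLOP feasibility estimate with simple compounding. A minimal sketch, where the 5e25 FLOP starting point for a 2024 frontier run is an illustrative assumption rather than a figure from this page:

```python
# Extrapolate frontier training compute at ~5x/year and see when the
# trend reaches Epoch AI's ~2e29 FLOP feasibility estimate for 2030.
# The 5e25 FLOP 2024 starting point is an illustrative assumption.

def projected_flop(start_flop: float, start_year: int, year: int, growth: float = 5.0) -> float:
    """Extrapolate training compute at a constant yearly growth factor."""
    return start_flop * growth ** (year - start_year)

start_flop, start_year = 5e25, 2024
ceiling = 2e29  # Epoch AI: roughly the largest feasible run by 2030

for year in range(start_year, 2031):
    flop = projected_flop(start_flop, start_year, year)
    marker = " <-- exceeds feasibility ceiling" if flop > ceiling else ""
    print(f"{year}: {flop:.1e} FLOP{marker}")
```

On these assumptions the naive trend reaches the ceiling by 2030, which is one way to see why the compute-cost row counts as an approaching limit rather than a distant one.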

Historical Precedents

The history of technology provides crucial context for estimating the probability of paradigm shifts. As documented by research on technological paradigm shifts, notable figures consistently fail to predict transformative changes. Wilbur Wright famously said in 1901 that "man would not fly for 50 years"; two years later, he and his brother achieved flight.

Past Paradigm Shifts in AI

| Shift | Year | From | To | Lead Time | Was It Predicted? | Impact |
|---|---|---|---|---|---|---|
| Neural network revival | 2012 | Symbolic AI | Deep learning | 30+ years | Partially (by few) | AlexNet: 15% error reduction on ImageNet |
| Attention/transformers | 2017 | RNNs/CNNs | Transformers | 3 years (attention existed 2014) | Somewhat surprising | Enabled 100B+ parameter models |
| Scaling laws | 2020 | "Need new ideas" | "Just scale" | N/A | Surprising to many | Kaplan et al. showed predictable improvement |
| In-context learning | 2020 | Fine-tuning | Prompting | N/A | Not predicted | GPT-3 few-shot emerged unexpectedly |
| RLHF effectiveness | 2022 | Supervised only | RLHF | 5 years | Somewhat expected | ChatGPT achieved 100M users in 2 months |
| Reasoning models | 2024 | Pre-training focus | Post-training scaling | N/A | Not predicted | Novel RL techniques changed compute allocation |

Expert Forecasting Track Record

| Forecast Source | Year Made | Prediction | Actual Outcome | Error |
|---|---|---|---|---|
| Metaculus AGI median | 2020 | ≈2070 | Now estimated ≈2027 | 43-year shift |
| AI Impacts survey | 2022 | AGI by 2060 | Updated to 2047 (2023) | 13-year shift |
| LEAP panel superforecasters | 2024 | MATH benchmark 14% by 2026 | GPT-5.2 achieved 33% in 2025 | 2.4x underestimate |
| FrontierMath experts | 2024 | 31% accuracy by end 2025 | 29% achieved Aug 2025 | Roughly accurate |
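The "2.4x underestimate" in the LEAP row is simply the ratio of actual to predicted benchmark performance; the same check applies to any forecast in the table. A minimal sketch using the table's values:

```python
# Ratio error for benchmark forecasts: actual / predicted.
# Input values are taken from the track-record table above.
def ratio_error(predicted: float, actual: float) -> float:
    return actual / predicted

leap = ratio_error(predicted=14.0, actual=33.0)      # LEAP panel
frontier = ratio_error(predicted=31.0, actual=29.0)  # FrontierMath

print(f"LEAP underestimate: {leap:.1f}x")      # -> 2.4x
print(f"FrontierMath ratio: {frontier:.2f}x")  # -> 0.94x (roughly accurate)
```

A ratio near 1 (FrontierMath) is what calibration looks like; a ratio well above 1 (LEAP) is the systematic underestimation pattern the surrounding text describes.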

Lessons from History

| Lesson | Implication | Quantified Example |
|---|---|---|
| Old ideas revive | Attention was known; transformers made it work | 3-year gap between attention (2014) and transformers (2017) |
| Combinations matter | Transformer = attention + layernorm + scale | Multiple paradigms combine to create breakthroughs |
| Empirical surprises | In-context learning emerged unexpectedly | Zero capability below ≈1B params, then emergent |
| Scaling surprises | Scaling laws weren't obvious a priori | 5x/year compute growth 2020-2024 |
| Experts underestimate | Specialists often wrong about own field | Wilbur Wright: "50 years"; achieved in 2 |

Potential Sources of Novelty

Paradigm Shift Candidates Comparison

The following table compares the most promising alternative paradigms based on current research momentum and potential impact.

| Paradigm | Maturity | Research Momentum | Key Advantage | Key Limitation | Est. Probability of Dominance by 2040 |
|---|---|---|---|---|---|
| Neuro-Symbolic AI | Growing | 63% of papers focus on learning/inference | Combines reasoning + learning | Scalability/joint-training remains "holy grail" | 8-15% |
| State Space Models | Early | Mamba, RWKV active development | Linear complexity vs quadratic attention | Haven't matched transformer performance at scale | 5-12% |
| Neural Architecture Search | Maturing | NASNet, EfficientNet production-ready | AI-designed architectures | Often optimizes within existing paradigms | 3-8% |
| Neuromorphic Computing | Early | Intel Loihi, IBM TrueNorth | 1000x energy efficiency | Software ecosystem immature | 2-5% |
| Quantum ML | Nascent | NISQ-era experiments | Exponential state space | Coherence, error correction unsolved | 1-3% |
| World Models | Growing | Video prediction, robotics | Causal understanding | Data requirements unclear | 5-10% |
| True Unknown | N/A | N/A | Cannot be characterized | Cannot be characterized | 1-5% |

Areas Where Breakthroughs Might Emerge

| Area | Potential | Current Status | Key Research Groups | Timeline Estimate |
|---|---|---|---|---|
| Learning algorithms | Beyond backprop/SGD | Active research | DeepMind, Anthropic | 3-7 years |
| Architectures | Beyond attention | SSMs gaining traction | Mamba team, RWKV | 2-5 years |
| Objective functions | Beyond token prediction | Minimal progress | Academic labs | 5-10 years |
| Training paradigms | Beyond supervised/RL | Post-training scaling emerging | OpenAI, Anthropic | 1-3 years |
| Hardware-software co-design | Novel compute substrates | Neuromorphic, analog | Intel, IBM, startups | 5-15 years |
| AI-for-AI | AI designing AI | AutoML/NAS advancing | Google, Microsoft | 2-5 years |

Specific Speculative Directions

| Direction | Description | Current Evidence | Probability of Major Impact | Key Uncertainties |
|---|---|---|---|---|
| Algorithmic breakthroughs | New training methods beyond gradient descent | Forward-forward algorithm (Hinton 2022) | 10-25% | Whether alternatives can match scale |
| Physics-based computing | Quantum, analog, optical | Google quantum supremacy claims | 3-8% | Error correction, coherence |
| Biological insights | From neuroscience | Sparse coding, predictive processing | 5-15% | Translation to algorithms |
| Emergent capabilities | Unexpected abilities at scale | In-context learning, chain-of-thought | Ongoing (certain) | Which capabilities next |
| AI-discovered AI | AI designs better architectures | NAS matches human designs | 15-30% | Search space definition |
| Causal/world models | Move beyond correlation | Causal AI research growing | 10-20% | Scalable causal inference |

Paradigm Evolution Dynamics

The following diagram illustrates potential pathways for paradigm evolution, including both incremental improvements and discontinuous shifts.

```mermaid
flowchart TD
  subgraph current["Current Paradigm (2020-2025)"]
      T[Transformers] --> S[Scaling Laws]
      S --> PT[Pre-training Dominance]
      PT --> RL[RL Post-training 2024]
  end

  subgraph limits["Approaching Limits"]
      D[Data Exhaustion<br/>Est. 2028]
      C[Compute Costs<br/>7T USD proposal]
      E[Energy Constraints<br/>32% growth needed]
  end

  subgraph candidates["Paradigm Candidates"]
      NS[Neuro-Symbolic<br/>8-15% probability]
      SSM[State Space Models<br/>5-12% probability]
      NAS[AI-Designed AI<br/>15-30% probability]
      WM[World Models<br/>5-10% probability]
      UK[Unknown Unknown<br/>1-5% probability]
  end

  subgraph outcomes["Possible Outcomes"]
      INC[Incremental Extension<br/>Current paradigm continues]
      HYB[Hybrid Integration<br/>Multiple paradigms merge]
      DIS[Discontinuous Shift<br/>Novel paradigm dominates]
  end

  RL --> D
  RL --> C
  RL --> E

  D --> NS
  D --> SSM
  D --> NAS
  D --> WM
  D --> UK

  C --> NS
  C --> SSM
  C --> NAS

  NS --> HYB
  SSM --> INC
  NAS --> HYB
  WM --> HYB
  UK --> DIS

  style UK fill:#fff4e6,stroke:#ffa94d
  style DIS fill:#fff4e6,stroke:#ffa94d
  style T fill:#d0ebff,stroke:#339af0
  style HYB fill:#d3f9d8,stroke:#40c057
```

What Novel Approaches Might Look Like

Possible Characteristics

| Characteristic | Explanation | Current Paradigm Comparison | Historical Precedent |
|---|---|---|---|
| More efficient | Orders of magnitude less compute | GPT-4: ≈10^25 FLOP training | DeepSeek: 95% fewer resources claimed for similar performance |
| Different training | Not gradient descent | Backprop since 1986 | Forward-forward algorithm (Hinton 2022) |
| Different objectives | Not next-token prediction | Autoregressive LLMs dominant | World models, energy-based models |
| Different hardware | Not GPUs | NVIDIA dominates | Neuromorphic: 1000x energy efficiency potential |
| Different capabilities | Strong at what transformers struggle with | Reasoning, planning, efficiency | Neuro-symbolic: explicit reasoning |

Current Paradigm Constraints (Drivers of Potential Shift)

According to Epoch AI's scaling analysis, the current paradigm faces several quantifiable constraints:

| Constraint | Current Status | Projected Exhaustion | Implication |
|---|---|---|---|
| Training data | High-quality text near exhaustion | 2028 median estimate | New data sources or paradigms needed |
| Compute costs | $7 trillion infrastructure proposal (Altman 2024) | Investors prefer 10x increments | Economic limits approaching |
| Energy | Data centers need 32% yearly growth | Grid capacity constraints | Physical infrastructure bottleneck |
| RL scaling | Labs report 1-2 year sustainability | Compute infrastructure limits | Post-training gains may plateau |
| Model size | GPT-4: ≈1.8 trillion params (estimated) | Diminishing returns observed | Architecture efficiency matters more |
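The 2028 data-exhaustion estimate can be reproduced to first order with compounding arithmetic. A rough sketch, where the token stock, the 2024 dataset size, and the growth rate are all illustrative assumptions rather than Epoch AI's exact inputs:

```python
import math

# Rough reproduction of a data-exhaustion estimate: how long until
# frontier training datasets consume the stock of high-quality text?
# All three numbers below are illustrative assumptions.
token_stock = 3e14       # assumed usable high-quality text tokens
dataset_2024 = 1.5e13    # assumed tokens used by a 2024 frontier run
growth_per_year = 2.5    # assumed yearly growth in dataset size

# Solve dataset_2024 * growth^t = token_stock for t.
years = math.log(token_stock / dataset_2024) / math.log(growth_per_year)
print(f"Exhaustion after ~{years:.1f} years, i.e. around {2024 + years:.0f}")
```

Even with these crude inputs the answer lands within about a year of the 2028 median cited above, which illustrates why the estimate is robust to moderate changes in the assumptions.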

Warning Signs We Might Miss Something

| Sign | What It Suggests | Quantified Evidence |
|---|---|---|
| Fundamental capability ceilings | Current approaches hitting limits | Reasoning models required novel techniques beyond scaling |
| Efficiency gaps with biology | Brains use far less energy | Human brain: ~20W; GPT-4 inference: ≈100kW |
| Certain tasks remain hard | Reasoning, planning, learning efficiency | Neuro-symbolic needed for explicit reasoning |
| Theoretical gaps | Don't understand why current methods work | Only 5% of neuro-symbolic papers address meta-cognition |
| Benchmark saturation | Easy benchmarks solved | GPT-5.2 hit 33% on LiveCodeBench Pro |

Safety Implications

A paradigm shift in AI development would have profound implications for AI safety research. The Stanford HAI AI Index 2025 notes that safety research investment trails capability investment by approximately 10:1. A novel paradigm could either invalidate existing safety research or provide new opportunities for alignment.

Why Novel Approaches Are Concerning

| Concern | Explanation | Risk Level | Mitigation Difficulty |
|---|---|---|---|
| Unpredictability | Can't prepare for unknown risks | High | Very High |
| Rapid capability jumps | New paradigm might be much more capable | Very High | High |
| Different failure modes | Safety research might not transfer | High | Medium |
| Misplaced confidence | We might assume current understanding applies | Medium | Low |
| Compressed timelines | Less time to develop safety measures | Very High | Very High |
| Open-source proliferation | Novel techniques spread faster than safety measures | High | High |

Why They Might Be Better

| Potential Benefit | Explanation | Probability | Example |
|---|---|---|---|
| Designed for safety | New approaches could prioritize interpretability | 15-25% | Neuro-symbolic: 28% of papers address explainability |
| Different incentives | Might emerge from safety-focused research | 10-20% | Interpretability-first architectures |
| Better understanding | New paradigms might be more theoretically grounded | 20-30% | Causal AI provides formal guarantees |
| Natural alignment | Could have built-in alignment properties | 5-15% | Symbolic reasoning more auditable |
| Efficiency enables safety | More compute for alignment research | 25-35% | If 10x more efficient, more safety testing possible |

Safety Research Transferability by Paradigm

| Current Safety Research Area | Neuro-Symbolic | SSMs | Neuromorphic | Unknown |
|---|---|---|---|---|
| Interpretability | High transfer | Medium | Low | Unknown |
| RLHF/Constitutional AI | Medium | High | Low | Unknown |
| Formal verification | Very High | Medium | Medium | Unknown |
| Scalable oversight | Medium | High | Low | Unknown |
| Deceptive alignment detection | Low | Medium | Low | Unknown |

Research Questions

What Should We Monitor?

| Area | What to Watch | Key Indicators | Monitoring Frequency |
|---|---|---|---|
| Academic ML | Novel architectures, theoretical results | ArXiv papers, NeurIPS/ICML proceedings | Weekly |
| Industry labs | Unpublished breakthroughs | Hiring patterns, patent filings, leaked benchmarks | Monthly |
| Interdisciplinary | Physics, neuroscience, mathematics | Cross-disciplinary conferences, Nature/Science publications | Quarterly |
| AI-for-AI | AI systems discovering new AI methods | NAS/AutoML progress, AI-generated code quality | Monthly |
| Hardware developments | Novel compute substrates | Chip announcements, energy efficiency benchmarks | Quarterly |
| Scaling signals | Evidence of plateaus or breakthroughs | Epoch AI tracking, benchmark progress | Continuous |

How to Prepare for Unknown Unknowns?

| Strategy | Rationale | Investment Level | Priority |
|---|---|---|---|
| General safety research | Focus on principles that transfer | High | Critical |
| Monitoring infrastructure | Track developments broadly | Medium | High |
| Paradigm-agnostic alignment | Don't overfit to transformer-specific approaches | High | Critical |
| Worst-case planning | Assume capabilities might jump unexpectedly | Medium | High |
| Rapid response capacity | Ability to pivot safety research quickly | Medium | Medium |
| Diverse research portfolio | Fund safety research across multiple paradigms | High | High |

Key Monitoring Organizations

| Organization | Focus | Update Frequency | URL |
|---|---|---|---|
| Epoch AI | Compute trends, scaling analysis | Weekly | epoch.ai |
| LEAP Panel | Expert forecasts on AI development | Monthly | forecastingresearch.org |
| AI Index (Stanford HAI) | Comprehensive AI metrics | Annual | hai.stanford.edu |
| Metaculus | Prediction markets on AI timelines | Continuous | metaculus.com |
| 80,000 Hours | AI safety career/research priorities | Quarterly | 80000hours.org |

Bayesian Reasoning

How to Update

| Observation | Update Direction | Magnitude | Current Signal (2025) |
|---|---|---|---|
| Transformers continue scaling | Novel approaches less likely near-term | -3 to -5% | 5x/year growth continuing |
| Hard ceiling hit | Novel approaches more likely | +10 to +20% | Not yet observed |
| Data exhaustion | Novel approaches more likely | +5 to +10% | 2028 estimate approaching |
| Theoretical breakthrough | Pay attention to specific direction | Variable | Neuro-symbolic momentum |
| AI discovers better architecture | Accelerates unknown-unknown risk | +5 to +15% | NAS producing competitive models |
| Major lab pivots to new approach | Strong signal | +15 to +25% | Not observed |
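The magnitudes in this table are additive percentage-point shifts rather than full Bayesian conditioning, so they can be applied as a running adjustment to a central prior. A minimal sketch using illustrative midpoints, clamped to the 1-15% range used throughout this page:

```python
# Apply the table's additive updates to a central prior, clamping to
# the page's 1-15% range. Signal midpoints are illustrative choices.
prior = 6.5  # central estimate, percent

observed_signals = {
    "transformers_keep_scaling": -4.0,    # midpoint of -3 to -5
    "data_exhaustion_approaching": +7.5,  # midpoint of +5 to +10
    "nas_competitive_models": +10.0,      # midpoint of +5 to +15
}

estimate = prior + sum(observed_signals.values())
estimate = max(1.0, min(15.0, estimate))
print(f"Updated estimate: {estimate:.1f}%")  # -> 15.0%
```

Taking all three 2025 signals at face value pushes the estimate to the clamp, which illustrates why the signals should be weighed against each other rather than summed naively.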

Probability Estimate by Timeline

| Timeframe | Probability of Novel Paradigm Dominance | Key Assumptions | Confidence |
|---|---|---|---|
| By 2027 | 1-3% | Current scaling continues; no major breakthroughs | Medium |
| By 2030 | 5-12% | Data/compute limits start binding; research progresses | Medium |
| By 2035 | 10-20% | Current paradigm hits fundamental limits | Low |
| By 2040 | 15-30% | Long timeline allows paradigm maturation | Low |
| By 2050+ | 25-45% | Historical base rate of paradigm shifts | Very Low |

Why 1-15% Range Is Reasonable

The range reflects uncertainty about timelines and paradigm persistence:

Lower bound (1%): If transformative AI arrives within 3-5 years via current paradigm scaling, novel approaches have insufficient time to mature. The median Metaculus estimate of AGI by ~2027 supports this scenario.

Upper bound (15%): If current paradigm hits hard limits (data exhaustion, scaling saturation) before transformative AI, alternative approaches become necessary. Epoch AI projections of 2028 data exhaustion support this possibility.

Central estimate (5-8%): Accounts for historical base rate of paradigm shifts (~1 per decade in computing), current research momentum in alternatives, and uncertainty in both timelines and scaling projections.
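The overall estimate can be framed as a mixture over TAI-arrival windows, weighting each window's conditional probability (midpoints from the timeline table above) by an assumed probability that TAI arrives in that window. The arrival weights below are illustrative assumptions, deliberately short-timeline-heavy:

```python
# P(novel paradigm dominant at TAI) = sum over arrival windows of
# P(TAI arrives in window) * P(novel paradigm dominant | that window).
# Conditional probabilities are midpoints of the timeline table above;
# the arrival weights are illustrative assumptions.
windows = [
    # (arrival window, assumed P(arrival), midpoint P(novel | arrival))
    ("by 2027",    0.35, 0.02),
    ("2027-2030",  0.30, 0.085),
    ("2030-2035",  0.20, 0.15),
    ("2035-2040",  0.10, 0.225),
    ("after 2040", 0.05, 0.35),
]

p_novel = sum(p_arrival * p_cond for _, p_arrival, p_cond in windows)
print(f"Mixture estimate: {p_novel:.1%}")
```

With these weights the mixture lands inside the 1-15% range; shifting arrival mass toward pre-2027 pulls it toward the lower bound, and toward post-2035 pulls it toward the upper bound, matching the reasoning above.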

Critical Questions

| Uncertainty | Scenarios | Current Evidence | Resolution Timeline |
|---|---|---|---|
| How locked-in is the current paradigm? | Fundamental (like the wheel) vs. transitional (like vacuum tubes) | Transformer dominance 7+ years suggests maturity | 2-5 years |
| How much does understanding matter? | Empirical scaling sufficient vs. theory needed for next leap | Deep learning theory still immature | Unclear |
| Will AI-discovered AI come before TAI? | Yes (accelerates) vs. no (current paradigm dominates) | NAS producing competitive models | 2-4 years |
| How would we recognize a breakthrough? | Clear benchmark jump vs. gradual realization | Historical: transformers looked incremental initially | Retroactive |
| What are the true scaling limits? | Near current frontier vs. orders of magnitude remaining | Epoch: 2e29 FLOP feasible by 2030 | 3-5 years |
| Will safety concerns force paradigm change? | Interpretability needs drive alternatives vs. current approaches adapted | 28% of neuro-symbolic papers address explainability | Ongoing |

Scenario Analysis

| Scenario | Probability | Key Trigger | Implications for Safety |
|---|---|---|---|
| Transformer dominance continues | 55-70% | Scaling continues working; no hard limits | Current safety research remains relevant |
| Hybrid integration (transformer + neuro-symbolic) | 15-25% | Reasoning limitations drive integration | Safety approaches must span paradigms |
| Gradual SSM/alternative transition | 5-12% | Efficiency requirements dominate | Moderate adaptation of safety research |
| Discontinuous breakthrough | 3-8% | Fundamentally new approach discovered | Major safety research pivot required |
| AI-designed paradigm | 5-10% | NAS/AutoML produces novel architecture | Accelerated timeline; compressed safety window |
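Since these scenarios are meant to be mutually exclusive and exhaustive, their midpoints should sum to roughly 1. A quick consistency check on the ranges in the table above:

```python
# Consistency check: midpoints of mutually exclusive, exhaustive
# scenario ranges should sum to ~1. Ranges from the table above.
scenarios = {
    "transformer_dominance": (0.55, 0.70),
    "hybrid_integration":    (0.15, 0.25),
    "ssm_transition":        (0.05, 0.12),
    "discontinuous":         (0.03, 0.08),
    "ai_designed":           (0.05, 0.10),
}

midpoints = {k: (lo + hi) / 2 for k, (lo, hi) in scenarios.items()}
total = sum(midpoints.values())
normalized = {k: v / total for k, v in midpoints.items()}

print(f"Midpoint sum: {total:.2f}")  # slightly over 1, so ranges overlap a bit
for name, p in normalized.items():
    print(f"{name}: {p:.1%}")
```

The midpoints sum to 1.04, a mild overcount that signals the ranges overlap slightly; normalizing gives a coherent distribution without changing the qualitative ordering.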

Sources & Resources

Primary Research

| Source | Type | Key Finding | Year |
|---|---|---|---|
| Epoch AI: Can AI Scaling Continue? | Analysis | 2e29 FLOP runs feasible by 2030; data exhaustion ≈2028 | 2024 |
| Neuro-Symbolic AI 2024 Systematic Review | Survey | 63% of papers on learning/inference; 5% on meta-cognition | 2024 |
| LEAP Expert Panel | Forecasts | Experts underestimate AI progress on benchmarks | 2024 |
| 80,000 Hours: AGI Timeline Review | Analysis | Metaculus median shifted from 50 years to 5 years (2020-2024) | 2025 |
| NAS Systematic Review | Survey | NAS producing architectures matching human designs | 2024 |

Paradigm Shift Analysis

| Source | Focus | Relevance |
|---|---|---|
| Paradigm Shifts in Tech (Medium) | Historical patterns | Technologies build upon predecessors |
| AI Paradigm Analysis (Taylor & Francis) | AI as paradigm shift | Pattern similarity to historical tech revolutions |
| Neuro-Symbolic AI Overview | Third AI wave | Hybrid approaches as potential successor |
| AllianceBernstein: AI Paradigm Shift | Investment perspective | Paradigm shift timing uncertainty |

Forecasting and Uncertainty

| Source | Focus | Key Insight |
|---|---|---|
| Our World in Data: AI Timelines | Expert surveys | 13-year shift in AGI estimates (2022-2023 surveys) |
| The Problem with AGI Predictions | Prediction failures | Experts often wrong about own field |
| Clearerthinking: AI Disagreement | Methodology | Sources of forecasting disagreement |
| Science: AI and Unknown Unknowns | Uncertainty | Even experts struggle to predict 6 months ahead |

Technical Directions

| Source | Focus | Status |
|---|---|---|
| Neural Architecture Search Advances (NSR) | AutoML/NAS | AI designing AI architectures |
| Google 2025 Research Breakthroughs | Industry progress | Quantum, weather, scientific applications |
| FTI Consulting: AI Frontiers 2025 | Research directions | Agentic AI, multimodal, reasoning |
| Neuro-symbolic for Robustness (Springer) | Hybrid approaches | Interpretability, uncertainty quantification |

References

1. 80,000 Hours AGI Timelines Review — 80,000 Hours · Benjamin Todd · 2025 · ★★★☆☆
   A comprehensive synthesis reviewing expert predictions on AGI timelines from multiple groups including AI lab leaders, researchers, and forecasters. The review finds a notable convergence toward shorter timelines, with many estimates suggesting AGI could arrive before 2030. Expert communities that previously disagreed now show increasingly similar estimates.

2. AI Timelines — Our World in Data · ★★★★☆
   An explainer synthesizing expert survey evidence on when human-level AI might be developed, drawing primarily on Katja Grace et al.'s 2022 survey of 356 AI researchers. It highlights wide disagreement among experts, with half predicting human-level AI before 2061 and 90% within 100 years, while cautioning against over-reliance on expert forecasts.

3. Systematic review on neural architecture search — Springer (peer-reviewed) · Sasan Salmani Pour Avval, Vahid Yaghoubi, Nathan D. Eskue & Roger M. Groves · 2024 · ★★★★☆

4. Can AI Scaling Continue? — Epoch AI · ★★★★☆
   Epoch AI analyzes the key constraints and bottlenecks that could limit continued AI scaling through 2030, examining factors such as compute availability, energy infrastructure, data availability, and algorithmic progress. The analysis assesses whether current scaling trends in large language models and other AI systems can realistically be sustained over the next several years.

5. Advances in neural architecture search — Oxford Academic (peer-reviewed) · ★★★★★
   This academic survey reviews progress in Neural Architecture Search (NAS), covering automated methods for designing neural network architectures. It examines search strategies, performance estimation techniques, and applications across various domains, highlighting how NAS enables automated discovery of architectures that rival or surpass hand-designed models.

6. Google 2025 Research Breakthroughs — Google Research · ★★★★☆
   Google's annual research review by Jeff Dean, Demis Hassabis, and James Manyika surveys 2025 breakthroughs across eight domains including AI agents, reasoning, multimodality, and scientific discovery. The post highlights advances in the Gemini and Gemma model families and their applications to science, robotics, and global challenges, framed within Google's responsible AI development priorities.

7. Neural Architecture Search (overview)
   An overview of Neural Architecture Search (NAS), a subfield of AutoML that automates the design of neural network architectures. It covers the key methods, search spaces, and optimization strategies used to automatically discover high-performing architectures, reducing the need for manual human design.

8. Epoch AI (organization) · ★★★★☆
   Epoch AI is a research organization focused on tracking and analyzing trends in AI development, including training compute, model capabilities, and the trajectory of AI progress. They produce datasets, forecasts, and analyses that inform understanding of how quickly AI capabilities are advancing and what resources are required. Their work is widely cited in AI safety and policy discussions.

9. Metaculus (organization) · ★★★☆☆
   Metaculus is a collaborative online forecasting platform where users make probabilistic predictions on future events across domains including AI development, biosecurity, and global catastrophic risks. It aggregates crowd wisdom and expert forecasts to produce calibrated probability estimates on complex questions relevant to long-term planning and existential risk assessment.

10. 80,000 Hours (organization) · ★★★☆☆
    80,000 Hours is a nonprofit that provides research and advice on using your career to address the world's most pressing problems, with significant focus on AI safety and existential risk. It offers career guides, job boards, and in-depth research on high-priority cause areas and career paths.
