Novel / Unknown Approaches
Analyzes probability (1-15%) of novel AI paradigms emerging before transformative AI, systematically reviewing historical prediction failures (expert AGI timelines shifted 43 years in 4 years, 13 years in one survey cycle) and comparing alternative approaches like neuro-symbolic (8-15% probability), SSMs (5-12%), and NAS (15-30%). Concludes current paradigm faces quantified limits (data exhaustion ~2028, compute costs approaching economic constraints) but near-term timelines favor incumbent approaches.
This category represents the probability mass we should assign to approaches not yet discovered or not included in our current taxonomy. History shows that transformative technologies often come from unexpected directions, and intellectual humility requires acknowledging this. The field of AI has undergone cyclical periods of growth and decline, known as AI summers and winters, with each cycle bringing unexpected architectural innovations. We are currently in the third AI summer, characterized by the transformer paradigm, but historical patterns suggest eventual disruption.
The challenge of forecasting AI development is well-documented. According to 80,000 Hours' analysis of expert forecasts, mean estimates on Metaculus for when AGI will be developed plummeted from 50 years to 5 years between 2020 and 2024. The AI Impacts 2023 survey found machine learning researchers expected AGI by 2047, compared to 2060 in the 2022 survey. This 13-year shift in a single survey cycle demonstrates how difficult prediction is in this domain.
Beyond the "known unknowns" such as scaling limits and alignment challenges, we face a vast terrain of "unknown unknowns": emergent capabilities, unforeseen risks, and transformative shifts that defy prediction. The technology is evolving so rapidly that even experts struggle to predict its capabilities six months ahead.
Estimated probability of being dominant at transformative AI: 1-15% (range reflects timeline uncertainty; shorter timelines favor current paradigms, longer timelines favor novel approaches)
Why Include This Category
```mermaid
flowchart TB
    subgraph known["Known Approaches"]
        transformers["Transformers"]
        moe["Sparse/MoE"]
        ssm["SSMs"]
        neuro["Neuromorphic"]
        other["Other Known"]
    end
    subgraph unknown["Unknown Territory"]
        notyet["Not Yet Discovered"]
        overlooked["Overlooked Ideas"]
        combinations["Novel Combinations"]
        physics["New Physics?"]
    end
    known -->|"Sum to ≈85-99%"| total["Total Probability"]
    unknown -->|"Residual 1-15%"| total
```
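The diagram's residual logic is simple arithmetic: whatever probability mass is not claimed by known approaches belongs to the unknown category. A minimal sketch, with per-approach numbers that are illustrative assumptions rather than estimates from this page:

```python
# Residual probability mass for unknown approaches: whatever is left
# after summing estimates for every known approach. All per-approach
# numbers here are illustrative assumptions, not page estimates.
known_approaches = {
    "transformers_and_descendants": 0.70,
    "sparse_moe": 0.08,
    "ssm": 0.05,
    "neuromorphic": 0.02,
    "other_known": 0.05,
}

known_mass = sum(known_approaches.values())
residual = 1.0 - known_mass  # mass reserved for not-yet-known paradigms

print(f"known mass: {known_mass:.2f}, residual: {residual:.2f}")
# With these assumed numbers, the residual is 0.10 — inside the 1-15% band.
```

The point of the exercise is the constraint, not the specific numbers: any confident allocation across known approaches implicitly fixes how much humility is left over for the unknown.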
The history of technology provides crucial context for estimating the probability of paradigm shifts. As documented by research on technological paradigm shifts, notable figures consistently fail to predict transformative changes. Wilbur Wright famously said in 1901 that "man would not fly for 50 years"; two years later, he and his brother achieved flight.
A paradigm shift in AI development would have profound implications for AI safety research. The Stanford HAI AI Index 2025 notes that safety research investment trails capability investment by approximately 10:1. A novel paradigm could either invalidate existing safety research or provide new opportunities for alignment.
Why Novel Approaches Are Concerning

| Concern | Explanation | Risk Level | Mitigation Difficulty |
|---|---|---|---|
| Unpredictability | Can't prepare for unknown risks | High | Very High |
| Rapid capability jumps | New paradigm might be much more capable | Very High | High |
| Different failure modes | Safety research might not transfer | High | Medium |
| Misplaced confidence | We might assume current understanding applies | Medium | Low |
| Compressed timelines | Less time to develop safety measures | Very High | Very High |
| Open-source proliferation | Novel techniques spread faster than safety measures | – | – |
| Timeline | Probability of Novel Paradigm | Rationale | Confidence |
|---|---|---|---|
| – | – | Data/compute limits start binding; research progresses | Medium |
| By 2035 | 10-20% | Current paradigm hits fundamental limits | Low |
| By 2040 | 15-30% | Long timeline allows paradigm maturation | Low |
| By 2050+ | 25-45% | Historical base rate of paradigm shifts | Very Low |
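The rising probabilities above are roughly what a constant-hazard model produces. The annual hazard rate below is an illustrative assumption chosen to land near the table's midpoints, not a figure stated on this page:

```python
import math

# Constant-hazard sketch: P(novel paradigm dominant by year Y)
#   = 1 - exp(-rate * (Y - 2025))
# rate = 0.016/yr is a fitted illustrative assumption, not a page figure.
RATE = 0.016
BASE_YEAR = 2025

def p_novel_by(year: int) -> float:
    """Probability a novel paradigm has become dominant by `year`."""
    return 1.0 - math.exp(-RATE * (year - BASE_YEAR))

for year in (2035, 2040, 2050):
    print(f"by {year}: {p_novel_by(year):.0%}")
# Prints roughly 15%, 21%, 33% — each inside the table's
# 10-20%, 15-30%, and 25-45% bands respectively.
```

A constant hazard is the simplest defensible shape here; if one believes paradigm-shift risk accelerates as current-paradigm limits bind, the curve would be steeper in later decades.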
Why 1-15% Range Is Reasonable
The range reflects uncertainty about timelines and paradigm persistence:

- Lower bound (1%): If transformative AI arrives within 3-5 years via current paradigm scaling, novel approaches have insufficient time to mature. The median Metaculus estimate of AGI by ~2027 supports this scenario.
- Upper bound (15%): If the current paradigm hits hard limits (data exhaustion, scaling saturation) before transformative AI, alternative approaches become necessary. Epoch AI projections of 2028 data exhaustion support this possibility.
- Central estimate (5-8%): Accounts for the historical base rate of paradigm shifts (~1 per decade in computing), current research momentum in alternatives, and uncertainty in both timelines and scaling projections.
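The central estimate's base-rate logic can be decomposed into two factors: a shift must both occur before transformative AI and come to dominate once it occurs. In the sketch below, the shift rate encodes the page's ~1-per-decade base rate; the dominance factor and the 5-year timeline are illustrative assumptions:

```python
import math

# Two-factor sketch behind a ~5-8% central estimate:
#   P(novel paradigm dominant) ≈ P(shift occurs before TAI)
#                              * P(shift matures and dominates | it occurs)
shift_rate = 0.1               # shifts per year: the page's ~1-per-decade base rate
years_to_tai = 5               # assumed short timeline to transformative AI
p_dominates_given_shift = 0.15 # illustrative assumption, not a page figure

p_shift = 1.0 - math.exp(-shift_rate * years_to_tai)   # ≈ 0.39
p_novel_dominant = p_shift * p_dominates_given_shift   # ≈ 0.06

print(f"P(shift occurs before TAI): {p_shift:.2f}")
print(f"P(novel paradigm dominant): {p_novel_dominant:.2f}")
```

Note that the occurrence probability alone (~39% over 5 years) far exceeds the central estimate; it is the conditional dominance factor — a new paradigm must also mature, scale, and outcompete an entrenched incumbent in time — that pulls the product down into the 5-8% band.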
Critical Questions

| Uncertainty | Scenarios | Current Evidence | Resolution Timeline |
|---|---|---|---|
| How locked-in is the current paradigm? | Fundamental (like the wheel) vs. Transitional (like vacuum tubes) | – | – |
References

A comprehensive synthesis by 80,000 Hours reviewing expert predictions on AGI timelines from multiple groups including AI lab leaders, researchers, and forecasters. The review finds a notable convergence toward shorter timelines, with many estimates suggesting AGI could arrive before 2030. Different expert communities that previously disagreed are now showing increasingly similar estimates.
An Our World in Data explainer synthesizing expert survey evidence on when human-level AI might be developed, drawing primarily on Katja Grace et al.'s 2022 survey of 356 AI researchers. The piece highlights wide disagreement among experts, with half predicting human-level AI before 2061 and 90% within 100 years, while cautioning against over-reliance on expert forecasts.
Epoch AI analyzes the key constraints and bottlenecks that could limit continued AI scaling through 2030, examining factors such as compute availability, energy infrastructure, data availability, and algorithmic progress. The analysis assesses whether current scaling trends in large language models and other AI systems can realistically be sustained over the next several years.
This academic survey reviews progress in Neural Architecture Search (NAS), covering automated methods for designing neural network architectures. It examines search strategies, performance estimation techniques, and applications across various domains, highlighting how NAS enables automated discovery of architectures that rival or surpass hand-designed models.
Google's annual research review by Jeff Dean, Demis Hassabis, and James Manyika surveys 2025 breakthroughs across eight domains including AI agents, reasoning, multimodality, and scientific discovery. The post highlights advances in Gemini and Gemma model families and their applications to science, robotics, and global challenges, framed within Google's responsible AI development priorities.
An overview of Neural Architecture Search (NAS), a subfield of AutoML that automates the design of neural network architectures. It covers the key methods, search spaces, and optimization strategies used to automatically discover high-performing architectures, reducing the need for manual human design.
Epoch AI is a research organization focused on tracking and analyzing trends in AI development, including training compute, model capabilities, and the trajectory of AI progress. They produce datasets, forecasts, and analyses that inform understanding of how quickly AI capabilities are advancing and what resources are required. Their work is widely cited in AI safety and policy discussions.
Metaculus is a collaborative online forecasting platform where users make probabilistic predictions on future events across domains including AI development, biosecurity, and global catastrophic risks. It aggregates crowd wisdom and expert forecasts to produce calibrated probability estimates on complex questions relevant to long-term planning and existential risk assessment.
80,000 Hours is a nonprofit that provides research and advice on how to use your career to have the most positive impact on the world's most pressing problems, with significant focus on AI safety and existential risk. They offer career guides, job boards, and in-depth research on high-priority cause areas and career paths. Their methodology emphasizes earning to give, direct work in high-impact fields, and building career capital.