Novel / Unknown Approaches
Analyzes probability (1-15%) of novel AI paradigms emerging before transformative AI, systematically reviewing historical prediction failures (expert AGI timelines shifted 43 years in 4 years, 13 years in one survey cycle) and comparing alternative approaches like neuro-symbolic (8-15% probability), SSMs (5-12%), and NAS (15-30%). Concludes current paradigm faces quantified limits (data exhaustion ~2028, compute costs approaching economic constraints) but near-term timelines favor incumbent approaches.
This category represents the probability mass we should assign to approaches not yet discovered or not included in our current taxonomy. History shows that transformative technologies often come from unexpected directions, and intellectual humility requires acknowledging this. The field of AI has undergone cyclical periods of growth and decline, known as AI summers and winters, with each cycle bringing unexpected architectural innovations. We are currently in the third AI summer, characterized by the transformer paradigm, but historical patterns suggest eventual disruption.
The challenge of forecasting AI development is well-documented. According to 80,000 Hours' analysis of expert forecasts, mean Metaculus estimates for when AGI will be developed plummeted from 50 years to 5 years between 2020 and 2024. The AI Impacts 2023 survey found machine learning researchers expected AGI by 2047, compared to 2060 in the 2022 survey. This 13-year shift in a single year demonstrates the difficulty of prediction in this domain.
Beyond the "known unknowns" such as scaling limits and alignment challenges, we face a vast terrain of "unknown unknowns": emergent capabilities, unforeseen risks, and transformative shifts that defy prediction. The technology itself is evolving so rapidly that even experts struggle to predict its capabilities six months ahead.
Estimated probability of being dominant at transformative AI: 1-15% (range reflects timeline uncertainty; shorter timelines favor current paradigms, longer timelines favor novel approaches)
The history of technology provides crucial context for estimating the probability of paradigm shifts. As documented by research on technological paradigm shifts, notable figures consistently fail to predict transformative changes. Wilbur Wright famously said in 1901 that "man would not fly for 50 years"; two years later, he and his brother achieved flight.
A paradigm shift in AI development would have profound implications for AI safety research. The Stanford HAI AI Index 2025 notes that safety research investment trails capability investment by approximately 10:1. A novel paradigm could either invalidate existing safety research or provide new opportunities for alignment.
## Why Novel Approaches Are Concerning

| Concern | Explanation | Risk Level | Mitigation Difficulty |
|---|---|---|---|
| Unpredictability | Can't prepare for unknown risks | High | Very High |
| Rapid capability jumps | New paradigm might be much more capable | Very High | High |
| Different failure modes | Safety research might not transfer | High | Medium |
| Misplaced confidence | We might assume current understanding applies | Medium | Low |
| Compressed timelines | Less time to develop safety measures | Very High | Very High |
| Open-source proliferation | Novel techniques spread faster than safety measures | | |
| Timeframe | Probability of Novel Paradigm | Rationale | Confidence |
|---|---|---|---|
| | | Data/compute limits start binding; research progresses | Medium |
| By 2035 | 10-20% | Current paradigm hits fundamental limits | Low |
| By 2040 | 15-30% | Long timeline allows paradigm maturation | Low |
| By 2050+ | 25-45% | Historical base rate of paradigm shifts | Very Low |
## Why 1-15% Range Is Reasonable

The range reflects uncertainty about timelines and paradigm persistence:

- **Lower bound (1%):** If transformative AI arrives within 3-5 years via current-paradigm scaling, novel approaches have insufficient time to mature. The median Metaculus estimate of AGI by ~2027 supports this scenario.
- **Upper bound (15%):** If the current paradigm hits hard limits (data exhaustion, scaling saturation) before transformative AI, alternative approaches become necessary. Epoch AI's projection of data exhaustion around 2028 supports this possibility.
- **Central estimate (5-8%):** Accounts for the historical base rate of paradigm shifts (~1 per decade in computing), current research momentum in alternatives, and uncertainty in both timelines and scaling projections.
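As a rough sanity check on these bounds, the estimate can be framed as a toy Poisson arrival model: candidate paradigm shifts arrive at some base rate, and each has some chance of actually displacing the incumbent before transformative AI arrives. The specific rate and displacement-probability values below are illustrative assumptions, not figures established elsewhere on this page:

```python
import math

# Illustrative assumptions (not established figures):
SHIFTS_PER_YEAR = 0.10   # ~1 candidate paradigm shift per decade in computing
P_DISPLACES = 0.15       # chance a given shift actually dethrones the incumbent

def p_novel_paradigm_by(years_until_tai: float) -> float:
    """P(at least one displacing shift occurs before transformative AI),
    assuming shifts arrive as a Poisson process and displace independently."""
    effective_rate = SHIFTS_PER_YEAR * P_DISPLACES
    return 1.0 - math.exp(-effective_rate * years_until_tai)

for horizon in (3, 5, 10, 25):
    print(f"TAI in {horizon:>2} years -> "
          f"P(novel paradigm) ~ {p_novel_paradigm_by(horizon):.0%}")
```

Under these assumptions, 3-5 year horizons land near the 5-8% central estimate, a 10-year horizon approaches the 15% upper bound, and longer horizons exceed it, which is the same qualitative pattern the range is meant to capture.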
## Critical Questions

| Uncertainty | Scenarios | Current Evidence | Resolution Timeline |
|---|---|---|---|
| How locked-in is the current paradigm? | Fundamental (like the wheel) vs. Transitional (like vacuum tubes) | | |