On Deference and Yudkowsky's AI Risk Estimates
Author
bmg
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
An EA Forum discussion post exploring the epistemics of deferring to Eliezer Yudkowsky's AI doom estimates, relevant to how the AI safety community forms and updates beliefs about existential risk probabilities.
Forum Post Details
Karma
291
Comments
194
Forum
eaforum
Forum Tags
AI safety · Building effective altruism · Forecasting · Epistemic deference · AI alignment · AI forecasting · Eliezer Yudkowsky · Risk assessment · Criticism and Red Teaming Contest · Criticism of work in effective altruism
Metadata
Importance: 45/100 · blog post · commentary
Summary
This EA Forum post examines whether and how much to defer to Eliezer Yudkowsky's high probability estimates of AI-caused human extinction, exploring the epistemics of expert deference in AI safety contexts. It discusses the tension between independent reasoning and deferring to domain experts when assessing existential risk from advanced AI.
Key Points
- Explores the epistemic question of how much weight to give Yudkowsky's high p(doom) estimates versus forming independent views
- Discusses the philosophical challenges of deference: when to trust experts vs. when to reason independently about AI risk
- Examines Yudkowsky's track record and credibility as a forecaster and AI safety researcher
- Considers whether the EA community over- or under-defers to prominent figures on existential risk questions
- Addresses the difficulty of calibrating beliefs about unprecedented, low-frequency catastrophic events
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 23, 2026 · 98 KB
# [On Deference and Yudkowsky's AI Risk Estimates](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates)
by [bmg](https://forum.effectivealtruism.org/users/bmg?from=post_header)
Jun 19, 2022 · 20 min read · 194 comments · 291 karma
[AI safety](https://forum.effectivealtruism.org/topics/ai-safety) · [Building effective altruism](https://forum.effectivealtruism.org/topics/building-effective-altruism) · [Forecasting](https://forum.effectivealtruism.org/topics/forecasting) · [Epistemic deference](https://forum.effectivealtruism.org/topics/epistemic-deference) · [AI alignment](https://forum.effectivealtruism.org/topics/ai-alignment) · [AI forecasting](https://forum.effectivealtruism.org/topics/ai-forecasting) · [Eliezer Yudkowsky](https://forum.effectivealtruism.org/topics/eliezer-yudkowsky) · [Risk assessment](https://forum.effectivealtruism.org/topics/risk-assessment) · [Criticism and Red Teaming Contest](https://forum.effectivealtruism.org/topics/criticism-and-red-teaming-contest) · [Criticism of work in effective altruism](https://forum.effectivealtruism.org/topics/criticism-of-work-in-effective-altruism) · [Frontpage](https://forum.effectivealtruism.org/about#Finding_content)
[On Deference and Yudkowsky's AI Risk Estimates](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#)
[Introduction](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Introduction)
[Why write this post?](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Why_write_this_post_)
[Yudkowsky’s track record: some cherry-picked examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Yudkowsky_s_track_record__some_cherry_picked_examples)
[Fairly clearcut examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Fairly_clearcut_examples)
[1\. Predicting near-term extinction from nanotech](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#1__Predicting_near_term_extinction_from_nanotech)
[2\. Predicting that his team had a substantial chance of building AGI before 2010](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#2__Predicting_that_his_team_had_a_substantial_chance_of_building_AGI_before_2010)
[Somewhat disputable examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Somewhat_disputable_examples)
[3\. Having high confidence that AI progress would be extremely discontinuous and localized and not require much compute](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#3__Having_high_confidence_th
... (truncated, 98 KB total)
Resource ID: e1fe34e189cc4c55 | Stable ID: MTJhNzNhOG