AI Impacts
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
AI Impacts is a key empirical research hub for AI safety; its expert surveys and wiki pages are frequently cited in discussions about AI timelines, risk probability, and strategic forecasting within the broader AI safety community.
Metadata
Summary
AI Impacts is a research organization that investigates empirical questions relevant to AI forecasting and safety, including AI timelines, discontinuous progress risks, and existential risk arguments. It maintains a wiki and blog featuring expert surveys, historical analyses, and structured arguments about transformative AI development. Notable outputs include periodic expert surveys on AI progress timelines.
Key Points
- Hosts the 2023 Expert Survey on Progress in AI, a large-scale survey of 2,778 ML researchers on timelines and risk
- Maintains a wiki covering AI existential risk arguments, timeline predictions, and responses to AI development
- Analyzes the likelihood of discontinuous progress around AGI development using historical technology analogies
- Investigates whether transformative AI technologies are inevitable, including cases of incentivized technologies not pursued
- Affiliated with FLI, MIRI, and FHI, representing a core node in the AI safety research ecosystem
Cited by 8 pages
| Page | Type | Quality |
|---|---|---|
| AGI Timeline | Concept | 59.0 |
| AI Risk Portfolio Analysis | Analysis | 64.0 |
| AI Compounding Risks Analysis Model | Analysis | 60.0 |
| Deceptive Alignment Decomposition Model | Analysis | 62.0 |
| International AI Coordination Game Model | Analysis | 59.0 |
| AI Safety Research Value Model | Analysis | 60.0 |
| AI Impacts | Organization | 53.0 |
| Epoch AI | Organization | 51.0 |
Cached Content Preview
## [Wiki](https://wiki.aiimpacts.org/)
The wiki replaces old AI Impacts pages. Old pages are still up, but for up-to-date content see the [**wiki**](https://wiki.aiimpacts.org/doku.php?id=start) versions.
## Featured wiki pages
- [2023 Expert Survey on Progress in AI](https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai)
- [Fiction relevant to AI futurism](https://wiki.aiimpacts.org/doku.php?id=featured_articles:fiction_relevant_to_ai_futurism)
- [Is AI an existential risk to humanity?](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start)
- [Likelihood of discontinuous progress around the development of AGI](https://wiki.aiimpacts.org/doku.php?id=featured_articles:likelihood_of_discontinuous_progress_around_the_development_of_agi)
- [Incentivized technologies not pursued](https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:start)
## [Blog](https://blog.aiimpacts.org/)
- [FAQ: Expert Survey on Progress in AI methodology](https://blog.aiimpacts.org/p/faq-expert-survey-on-progress-in)
2025-10-31, by Katja Grace
Context
- [Reanalyzing the 2023 Expert Survey on Progress in AI](https://blog.aiimpacts.org/p/reanalyzing-the-2023-expert-survey)
2024-12-15, by Ben Weinstein-Raun
With new charts, and a newly open-source codebase
- [Winners of the Essay competition on the Automation of Wisdom and Philosophy](https://blog.aiimpacts.org/p/winners-of-the-essay-competition)
2024-10-28, by Owen Cotton-Barratt
We’re delighted to announce the winners of the Essay competition on the Automation of Wisdom and Philosophy.