Why work at AI Impacts? - AI Impacts
Credibility Rating: 3/5 (Good)
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
A 2022 blog post by AI Impacts founder Katja Grace articulating the strategic rationale behind AI Impacts' research-first approach, useful for understanding the organization's philosophy and how it situates itself within the broader AI safety ecosystem.
Metadata
Importance: 35/100 · Tags: homepage, commentary
Summary
Katja Grace explains her reasoning for why AI Impacts is a high-impact place to work, covering the organization's mission as a research library on AI futures, and arguing that 'understanding the situation' around AI risk is currently more valuable on the margin than direct technical or governance interventions.
Key Points
- AI Impacts maintains a hierarchical library of best-guess answers to questions about AI futures, from high-level existential questions down to tractable sub-questions.
- Grace argues AI risk is a top cause area even under uncertainty, and that demonstrating it's not severe could redirect effort to other important problems.
- The core thesis is that 'understanding the situation' is more valuable on the margin than additional technical safety or governance intervention work.
- AI Impacts functions as a research group, blog, and community hub for researchers interested in forecasting and analyzing AI development trajectories.
- The post reflects a personal perspective from the founder rather than an official organizational statement.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Impacts | Organization | 53.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 27 KB
_Katja Grace, 6 March 2022_
AI Impacts is beginning a serious hiring round (see [here](https://aiimpacts.org/jobs/) for job postings), so I’d like to explain a bit why it has been my own best guess at the highest-impact place for me to work. (As in, this is a personal blog post by Katja on the AI Impacts blog, not some kind of officialesque missive from the organization.)
But first—
## What is AI Impacts?
AI Impacts is a few things:
1. **An online library of best-guess answers to questions about the future of AI.** Including [big questions](https://aiimpacts.org/ai-impacts-key-questions-of-interest/), like ‘[how likely is a sudden jump in AI progress at around human-level performance?](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/)’, and sub-questions informing those answers (‘[are discontinuities common in technological trends?](https://aiimpacts.org/discontinuous-progress-investigation/)’), and sub-sub-questions (‘[did penicillin cause any discontinuous changes in syphilis trends?](https://aiimpacts.org/penicillin-and-historic-syphilis-trends/)’), and so on. Each page ideally has a high-level conclusion at the top, and reasoning supporting it below, which will often call on the conclusions of other pages. These form something like a set of trees, with important, hard, decision-relevant questions at the root and low-level, tractable, harder-to-use-on-their-own questions at the leaves. This isn’t super obvious at the moment, because a lot of the trees are very incomplete, but that’s the basic idea.
2. **A [research group](https://aiimpacts.org/about/) focused on finding such answers**, through a mixture of original research and gathering up that which has been researched by others.
3. **A [blog](https://aiimpacts.org/category/blog/) on these topics**, for more opinionated takes, conversational guides to the research, updates, and other things that don’t fit in the main library (like this!).
4. **A locus of events** for people interested in this kind of research, e.g. dinners and [workshops](https://aiimpacts.org/vignettes-workshop/), a Slack with other researchers, online coffees.
## Why think working on AI Impacts is among the best things to do?
### 1. AI risk looks like a top-notch cause area
It seems plausible to me that advanced AI poses a substantial risk to humanity’s survival. I don’t think this is clear, but I do think there’s enough evidence that it warrants a lot of attention. I hope to write more about this; see [here](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai) for recent discussion. Furthermore, I don’t know of other similarly serious risks (see Ord’s _[The Precipice](https://theprecipice.com/)_ for a review), or of other intervention areas that look clearly more valuable than reducing existential risk to humanity.
I actually also think AI risk is a potentially high-impact area to work (for a little while at l
... (truncated, 27 KB total)
Resource ID: 3f27a8a39aa8dbd3 | Stable ID: ZjVlNjcwYj