CFAR 2017 Impact Report
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Center for Applied Rationality
CFAR (Center for Applied Rationality) is an organization closely affiliated with the AI safety community that trains researchers and EA-aligned individuals in rationality skills. This annual report documents its 2017 activities and is primarily useful for understanding the ecosystem supporting AI safety human capital development.
Metadata
Summary
This is the Center for Applied Rationality's 2017 annual impact report, documenting CFAR's activities, outcomes, and mission progress over the year. CFAR focuses on developing rationality training to help individuals—particularly those working on existential risk and AI safety—make better decisions and reason more effectively. The report likely covers workshop attendance, curriculum updates, and evidence of participant impact in high-stakes domains.
Key Points
- CFAR trains individuals in applied rationality techniques aimed at improving decision-making in high-stakes domains including AI safety research
- The report documents organizational growth, workshop reach, and participant outcomes for the 2017 calendar year
- CFAR explicitly targets its programs toward people working on existential risk reduction, including AI alignment researchers
- Impact metrics likely include participant surveys, follow-up assessments, and qualitative accounts of behavior change
- CFAR operates as a nonprofit supporting the broader EA and AI safety ecosystem through human capital development
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Center for Applied Rationality | Organization | 62.0 |
Cached Content Preview
# Intro
CFAR was founded with a [two-part vision](http://lesswrong.com/lw/jej/why_cfar/): to develop and train people in the art
of rationality, and to have some of those people then be more likely to work effectively on the world’s most important
problems. [Last](http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/) [year](http://lesswrong.com/r/lesswrong/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/), we decided to focus
especially on our alums’ impact on the problem of AI risk.
The question of how well CFAR is succeeding at its mission can be broken down into two subquestions:
- What effect does CFAR have on people?
- What effect do those people have on the world, and in particular on reducing AI risk, that they wouldn't have had were it not for their interaction with CFAR?
This post describes how we have been thinking about the second of these two questions, which we have been especially
focused on over the past year.
A relatively straightforward way to approach this question is to look at what our alumni are up to, as we have done
informally throughout CFAR’s history. However, this still leaves tricky questions to answer: how to assess the size of
their impact on the world, how to guess at the counterfactual of what they would have been doing without CFAR’s
influence, and how to pinpoint which aspects of their involvement with CFAR made a difference.
Over the past year and a half we have tried to look more systematically at what our alumni are up to, and to put more
effort into investigating these tricky questions. We have done this, as described in more detail below, by evaluating
the results of an impact survey sent to all CFAR alumni, and by conducting interviews with some alums who seem to us to
be doing especially high-impact work.
# Alumni Impact Survey
In May 2016, we set out to count the number of alumni who had had an increase in expected impact due to their involvement with CFAR, by sending a survey to all of our alumni. The survey asked whether they thought their positive impact on the world had increased as a result of their interactions with CFAR, and (if so) what they were doing differently and how CFAR had affected the change. In June 2017 we sent out an updated version of the survey with more prompts for text responses.
For each person’s responses, we manually coded whether it seemed like 1) their current path was high-impact, 2) their
current path was substantially better than their old path, and 3) CFAR played a significant role in this change. We
counted someone as having an “increase in expected impact” (IEI) if they met all three criteria.
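The counting rule is conjunctive: a respondent counts as an IEI only if all three coded judgments are positive. Here is a minimal sketch of that tally in Python; the field and function names are hypothetical illustrations, not CFAR's actual coding scheme or analysis pipeline:

```python
from dataclasses import dataclass

@dataclass
class CodedResponse:
    """Manually coded judgments for one alum's survey response.

    Field names are hypothetical; the three booleans correspond to the
    three criteria described in the report.
    """
    high_impact_path: bool        # 1) current path seems high-impact
    substantially_better: bool    # 2) current path substantially better than old path
    cfar_significant_role: bool   # 3) CFAR played a significant role in the change

def has_iei(r: CodedResponse) -> bool:
    """An 'increase in expected impact' requires all three criteria."""
    return r.high_impact_path and r.substantially_better and r.cfar_significant_role

# Toy data to illustrate the conjunctive rule:
responses = [
    CodedResponse(True, True, True),    # counts as an IEI
    CodedResponse(True, True, False),   # improved path, but not attributable to CFAR
    CodedResponse(False, True, True),   # attributable to CFAR, but path not high-impact
]
iei_count = sum(has_iei(r) for r in responses)
print(f"{iei_count} of {len(responses)} respondents coded as IEI")  # 1 of 3
```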
On the first criterion, a path could count as “high-impact” based on expected future impact rather than past impact, as
long as the person seemed to be taking concrete steps towards having an impact. For donations, the minimum bar for what
counted as “substantial” or “significant” was $5,000 to effective charities (or plans to give 5% o
... (truncated, 23 KB total)