Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Center for Applied Rationality

CFAR (Center for Applied Rationality) runs workshops aimed at improving reasoning skills, particularly among people working on AI safety and existential risk; these 2016 case studies document participant outcomes and are relevant context for understanding the rationality-adjacent AI safety community.

Metadata

Importance: 28/100 · organizational report · primary source

Summary

A collection of case studies from the Center for Applied Rationality (CFAR) documenting how participants applied rationality training techniques to real-world problems in 2016. The studies illustrate practical outcomes of CFAR's workshop curriculum, showing how improved reasoning and decision-making skills affect participants' personal and professional lives. This resource serves as qualitative evidence for the effectiveness of rationality training in the AI safety and effective altruism communities.

Key Points

  • Documents real-world applications of CFAR's rationality techniques by workshop alumni across various life domains.
  • Provides qualitative evidence for the impact of rationality training on decision-making, career choices, and problem-solving.
  • Relevant to the AI safety community as CFAR historically focused on improving reasoning among people working on existential risks.
  • Case studies illustrate how cognitive tools like goal factoring, internal double crux, and TAPs are applied in practice.
  • Serves as outreach and impact documentation for CFAR's mission to improve human rationality for high-stakes outcomes.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Center for Applied Rationality | Organization | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 30 KB
#### Content

- [Organization Case Studies](https://www.rationality.org/studies/2016-case-studies#organization-case-studies)
  - [MIRI](https://www.rationality.org/studies/2016-case-studies#miri)
  - [FLI](https://www.rationality.org/studies/2016-case-studies#fli)
  - [Arbital](https://www.rationality.org/studies/2016-case-studies#arbital)
- [Individual Case Studies](https://www.rationality.org/studies/2016-case-studies#individual-case-studies)
  - [Victoria Krakovna](https://www.rationality.org/studies/2016-case-studies#victoria-krakovna)
  - [Peter McIntyre](https://www.rationality.org/studies/2016-case-studies#peter-mcintyre)
  - [Stefan Schubert](https://www.rationality.org/studies/2016-case-studies#stefan-schubert)
  - [Ben Hoffman](https://www.rationality.org/studies/2016-case-studies#ben-hoffman)

**January 2017**

One way to assess what CFAR has done is to look at our alumni. What are CFAR alumni doing, and how do they think that their involvement with CFAR has influenced their path?

For these case studies, we have chosen to focus on alumni who are doing work that is relevant to existential risk, especially in the domain of AI safety, with an emphasis on how their interactions with CFAR and the surrounding community have influenced their path. We attempt to compare their actual path with guesses at how things would have gone in the counterfactual world where they did not become involved with CFAR.

The case studies below are divided into two sections: organizations in which CFAR alumni have played a large role, and individual CFAR alumni. They include case studies of people working directly on existential risk, as well as of people doing broader effective altruism work that we consider relevant to existential risk (along with other cause areas).

# Organization Case Studies

## MIRI

The [Machine Intelligence Research Institute](https://intelligence.org/) is one of the main organizations where technical research on AI safety takes place. MIRI narrowed its mission in 2013 to focus on technical research and currently has 6 research fellows, along with several other research contributors and a general staff that keeps the organization running.

Most of MIRI’s staff have attended a CFAR workshop, and many attribute some role to CFAR in the path that led them to MIRI or in increasing their effectiveness. MIRI’s executive director Nate Soares estimates that the personal-efficacy gains staff have made through their interactions with CFAR account for 5–10% of MIRI’s total research output. For example, he thinks that the CFAR content on [double crux](http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/) has helped MIRI researchers collaborate more effectively. Nate considers MIRI’s research team to have been heavily talent-constrained from 2013–16, in that if one of their researchers hadn’t joined MIRI, then MIRI would most likely have had a small

... (truncated, 30 KB total)
Resource ID: e78144b0339b1c8b | Stable ID: YWJlMWFjMW