Longterm Wiki · Updated 2026-02-03
Center for Applied Rationality

Type: Lab

Berkeley nonprofit founded in 2012 that teaches applied rationality through workshops ($3,900 for 4.5 days). It has trained 1,300+ alumni, who report 9.2/10 average satisfaction and a 0.17σ life-satisfaction increase at 1-year follow-up. CFAR received $3.5M+ from Coefficient Giving (formerly Open Philanthropy) and $5M from FTX (later clawed back); it faced major controversies over its handling of abuse allegations and over cult-like dynamics, and now operates with 8 part-time staff after a multi-year hiatus.

Quick Assessment

  • Type: Nonprofit organization (501(c)(3))
  • Founded: 2012
  • Location: Berkeley, California (now mostly remote)
  • Primary Focus: Teaching rationality techniques through workshops and programs
  • Key People: Anna Salamon (President), Julia Galef, Andrew Critch, Michael Smith (co-founders)
  • Major Funders: Coefficient Giving ($3.5M+), Survival and Flourishing Fund ($1.6M+)
  • Status: Active with reduced operations; resumed workshops in 2025 after hiatus
Sources:

  • Official Website: rationality.org
  • Wikipedia: en.wikipedia.org
  • LessWrong: lesswrong.com
  • EA Forum: forum.effectivealtruism.org

Overview

The Center for Applied Rationality (CFAR) is a nonprofit organization founded in 2012 that develops and teaches techniques to improve epistemic and instrumental rationality through immersive workshops, coaching programs, and curriculum development. The organization emerged from the rationality movement around Eliezer Yudkowsky's LessWrong community and focuses on applying insights from cognitive science, behavioral economics, psychology, and decision theory to real-world decision-making and problem-solving.12

CFAR's stated mission is "Developing clear thinking for the sake of humanity's future," with particular emphasis on addressing challenges related to existential risks, including AI safety.3 The organization has trained over 1,300 workshop alumni through approximately 60 flagship workshops held between 2012 and early 2020, with participants reporting an average satisfaction rating of 9.2 out of 10.45 Beyond its core workshop programs, CFAR has contributed to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.

The organization has evolved significantly since its founding, transitioning from a full-time staff of approximately 12 to a mostly-remote operation with eight part-time curriculum developers and instructors as of September 2025.6 After a multi-year hiatus from regular programming, CFAR resumed mainline workshops in November 2025, testing new formats while continuing to refine its approach to teaching applied rationality.7

History and Founding

CFAR was founded in 2012 by Anna Salamon, Julia Galef, Michael Smith (also known as Valentine Smith), and Andrew Critch.18 The organization's origins trace to Anna Salamon's work at the Machine Intelligence Research Institute (MIRI) in 2011, where she began experimenting with teaching rationality techniques.9 CFAR initially functioned as an extension of MIRI before becoming an independent nonprofit organization.9

The founders shared a common observation that intelligence, education, and good intentions were insufficient to guarantee sound decision-making. According to co-founder Julia Galef, they were motivated by the realization that "being smart, and being well educated and even being really well intentioned was far from a guarantee from making what turned out to be really stupid decisions."9 All four founders brought strong backgrounds in mathematics, artificial intelligence, and science to the organization's development.

Early Development and Growth

CFAR officially began offering classes in 2012, developing a flagship workshop model that charged $3,900 for multi-day intensive programs.9 The organization held monthly workshops training diverse participants including scientists, police officers, teachers, entrepreneurs, and high school students.10 Workshop activity varied by year:11

  • 2013: 7 workshops
  • 2014: 9 workshops (peak year)
  • 2015: 4 workshops
  • 2017: 8 workshops (including specialized workshops for MIRI researchers and Effective Altruism Global participants)

Leadership Transitions

Julia Galef served as CFAR's initial president until 2016, when Anna Salamon assumed the role.1 Salamon has continued as president through at least 2022, with Jesse Liptrap and Michael Blume serving on the board of directors.1 The organization's advisors have included Paul Slovic and Keith Stanovich.1

Organizational Evolution

In 2019, CFAR spun off the European Summer Program on Rationality (ESPR) into a separate organization run by Jan Kulveit, marking a shift in how the organization managed related programs.11 By September 2025, CFAR had transitioned to a mostly-remote operation with a significantly reduced staff, theorizing that part-time work might help avoid organizational pitfalls while maintaining curriculum development quality.6

After a hiatus from regular programming beginning in early 2020, CFAR conducted an experimental mini-workshop in June 2025 at Arbor Summer Camp and resumed mainline workshops in November 2025, marking the beginning of a pilot program testing new workshop formats.712

Core Programs and Methodology

Workshop Model

CFAR's primary educational delivery mechanism has been immersive multi-day workshops, typically running 4.5 days and costing $3,900 to $4,000.413 These workshops emphasize three core pillars: applying critical thinking to real-world problems, having students practice skills rather than merely learn concepts, and building lasting habits.10

The organization has offered scholarship programs, including funding from Jaan Tallinn (Skype co-founder) for Estonian students.1 CFAR has also provided specialized training for organizations including Facebook and participants in the Thiel Fellowship.1

Rationality Techniques

CFAR develops and refines rationality techniques by synthesizing insights from cognitive science, psychology, neuroscience, behavioral economics, mathematics, statistics, and game theory.14 The organization conducts empirical testing of techniques and invents new methods when academic research proves insufficient for practical application.14

Key techniques taught at CFAR workshops include:1415

  • Double Crux: Improves collaboration and conceptualizing research questions
  • Goal Factoring: Addresses mismatches between goals and plans
  • Focusing: Models second-to-second cognition
  • Resolve Cycles: Builds motivation and action
  • Murphy-Jitsu: Prepares for obstacles through pre-mortems
  • Trigger-Action Plans: Improves habit formation and research practices
  • Urge Propagation: Enhances understanding of motivation and decision-making

According to CFAR, these techniques aim to bridge System 1 and System 2 cognition, helping participants work with emotions, reframe problems, and understand the modular nature of the mind.16

Specialized Programs

Beyond flagship workshops, CFAR has developed targeted programs for specific communities:

  • SPARC (Summer Program on Applied Rationality and Cognition): Annual summer programs, with events through at least 2019; received Coefficient Giving funding ($304,000 over two years, granted in 2016).11
  • MIRI-focused workshops: Specialized sessions for AI safety researchers, including the 2015 MIRI Summer Fellows Program (a 3-week training program for AI safety researchers).14
  • Alumni support: Six-week official follow-up programs and ongoing coaching/mentorship for workshop graduates.17

Reported Impact and Effectiveness

Self-Reported Outcomes

CFAR has collected extensive survey data from workshop participants, reporting several positive outcomes:14

  • Neuroticism decrease: statistically significant (post-workshop)
  • Self-efficacy increase: marked, though less significant than the neuroticism decrease (post-workshop)
  • Life satisfaction increase: 0.17σ (1 year after participation)
  • Overall satisfaction: 9.2/10 average rating (post-workshop)

The organization's 2017 Impact Report indicated that alumni reported higher impact through better motivation navigation and clearer thinking.18 However, these results are primarily based on internal surveys without independent replication or external validation.19
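To give a rough sense of scale for the reported 0.17σ life-satisfaction increase, a standard-normal conversion can be used (an illustrative calculation, not part of CFAR's own reporting): assuming life satisfaction is approximately normally distributed, a 0.17σ shift would move a median participant to roughly the 57th percentile of the baseline distribution.

```python
from statistics import NormalDist

# Reported effect size at 1-year follow-up, in standard deviations.
# Treating this as a mean shift on a standard normal distribution is
# an illustrative assumption, not CFAR's methodology.
effect_sigma = 0.17

# Percentile of the baseline distribution reached by a median
# participant after a shift of effect_sigma
percentile = NormalDist().cdf(effect_sigma)
print(f"{percentile:.1%}")  # ≈ 56.7%
```

By the usual conventions for interpreting standardized effect sizes, values below about 0.2σ are considered small, which is consistent with the cautious framing of these survey results.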

Contributions to AI Safety and Effective Altruism

CFAR has documented case studies of alumni contributions to existential risk reduction and AI safety work. Notable examples include:20

  • Scott Garrabrant: Joined MIRI in 2015 after participating in CFAR/MIRI programs; contributed to logical induction research
  • Nate Soares: MIRI researcher who credited techniques like Double Crux with improving research collaboration
  • Ben Hoffman: Contributed to effective altruism and rationality writing, as well as AI risk modeling

The organization has positioned itself as contributing to the social infrastructure and individual skill development of communities working on AI safety and existential risk reduction, though direct causal impact is difficult to attribute.6

Limitations and Criticisms of Impact Claims

CFAR's impact evidence relies heavily on self-reported participant surveys and case studies rather than rigorous experimental design with control groups. A 2016 article in VICE noted that while CFAR's workshops had measurable effects on participants, the organization's approach had both strengths and flaws.1 The lack of recent external evaluations and reliance on internal data collection methods limit the strength of effectiveness claims.19

Funding and Financial Structure

Major Funding Sources

As of June 2022, CFAR had received substantial funding from organizations aligned with effective altruism and existential risk reduction:1

  • Coefficient Giving: Over $3.5 million
    • $1,035,000 grant in 2016 (two years) for operational improvements and research21
    • Multi-year institutional grant renewed in 2018 for 2018-201922
  • Survival and Flourishing Fund: Over $1.6 million
  • Effective Altruism Funds: Over $300,000
    • $150,000 from Long-Term Future Fund in April 2019 (addressing funding shortfall after 2018 controversy)11
    • Additional $150,000 recommended in August 201911

FTX Funding and Collapse

CFAR received substantial funding from FTX before the cryptocurrency exchange's collapse in November 2022. Between March and September 2022, $3,405,000 was transferred from the FTX Foundation to CFAR, with an additional $1,500,000 transferred on October 3, 2022, in ten separate transactions.23 Following FTX's collapse, FTX debtors required CFAR to return approximately $5 million.23
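The clawback figure is consistent with the transfers described above; a quick reconciliation using only the amounts reported in this section:

```python
# Transfers from the FTX Foundation to CFAR, as reported above
march_to_september_2022 = 3_405_000
october_2022 = 1_500_000  # sent October 3, 2022, in ten transactions

total = march_to_september_2022 + october_2022
print(total)  # 4905000, i.e. approximately the $5 million FTX debtors sought
```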

Financial Position

As of a year-end report (December 2019 or 2020; sources are inconsistent on the year), CFAR reported approximately $1.4 million in total liquidity, including $650,000 in cash, $575,000 in expected grants, and $215,000 in accounts receivable.24 The organization's total assets have been reported at $23,039,646, though this figure may include the venue property acquired in 2018.25

In 2018, CFAR's fundraising efforts (including the Coefficient Giving renewal) totaled over $2.5 million and enabled the organization to make a down payment on a permanent venue, reducing costs and expanding program capacity.22

Connection to AI Safety and Existential Risk

CFAR is categorized as an existential risk organization with explicit connections to existential risk from artificial intelligence.1 The organization originated from the rationality movement around LessWrong, which pioneered discussions of AI alignment and existential risks.1

While CFAR does not operate explicit AI alignment research programs, the organization's focus on improving individual and collective reasoning capacity is positioned as supporting the broader AI safety ecosystem. The rationality techniques taught at CFAR workshops—such as Bayesian reasoning, debiasing methods, and systematic decision-making frameworks—are presented as relevant to challenges in AI safety work, including addressing scope insensitivity and improving collaborative research practices.2627

Co-founder Andrew Critch has been involved in multiple AI safety organizations beyond CFAR, including the Center for Human-Compatible AI and FAR.AI, illustrating the overlap between CFAR's rationality-focused mission and AI safety work.1 The organization has conducted specialized workshops for MIRI researchers and other AI safety communities, though the scale and ongoing nature of these programs is unclear given CFAR's reduced operations.14

Controversies and Criticisms

Handling of Abuse Allegations (Brent Case)

In 2018-2019, CFAR faced significant criticism for its handling of allegations against a community member known as Brent (likely Brent Hildebrand). In January 2018, CFAR's semi-independent investigation panel, ACDC, received allegations of physical, sexual, and emotional abuse from one of Brent's former partners.28 Despite these allegations, ACDC recommended against banning the individual in April 2018, and CFAR leadership followed this recommendation.28

CFAR later publicly acknowledged serious failures in this case, stating they "had sufficient evidence long beforehand to notice that Brent might be harmful" and that their "failure to investigate this hypothesis explicitly was a mistake."28 The organization allowed Brent to attend and assist at several CFAR events, which "afforded him social legitimacy and caused significant harm in expectation."28 Staff members reported feeling manipulated by Brent, and CFAR acknowledged that as an organization "which exists to promote epistemic integrity," they should be held to "an especially high bar" on such matters.28

Following public criticism in September 2018, CFAR disbanded ACDC, determining that "the panelists were in over their heads."28 The organization issued a detailed public apology acknowledging that their safety procedures and investigation processes were inadequate and that the harm was preventable.28

The Brent controversy led to CFAR not holding a fundraiser in 2018, creating a funding shortfall that was later addressed by emergency grants from the Long-Term Future Fund in 2019.11

"Zizians" Incident and Associated Violence

On November 15, 2019, four individuals identifying as "Zizians" (a rationalist splinter group wearing Guy Fawkes masks) were arrested for blockading a CFAR retreat event at a wooded venue in Sonoma County.1 The protesters accused CFAR's leadership of discriminating against trans women and of failing to develop novel rationality techniques.1

According to Wikipedia, alleged Zizian members were later linked to serious violent crimes: an attempted murder in November 2022 and four murders in 2022 and 2025, including the killing of a U.S. Border Patrol agent in a shootout; two of the alleged members themselves died violently.1 While these crimes occurred after the 2019 protest, and the connection between the protesters and the later violent incidents is not fully detailed in available sources, the association has contributed to negative perceptions of elements within the broader rationality community.

Organizational Culture Concerns

In a 2025 interview with NBC News, CFAR president and co-founder Anna Salamon stated: "We didn't know at the time, but in hindsight we were creating conditions for a cult."23 Salamon characterized the organization's implicit messaging as suggesting that "human thinking is flawed and biased," with the exception that "ours [is unique]. We have a unique method of seeing things clearly"—representing what she described as an overestimation of their own competence.23

Critics have described CFAR workshops as "vaguely culty," blending cognitive science with elements reminiscent of self-help movements and featuring rhetoric about humans as fundamentally flawed beings needing systematic fixes.13 The high cost of workshops ($3,900 for 4 days) has also drawn criticism, with some comparing CFAR's claims about cognitive improvement to Lumosity's claims that resulted in an FTC fine for making unfounded assertions.13

Effectiveness and Institutional Incentives

Some rationality community members have criticized CFAR for limited progress toward a "real art of rationality" that aids meaningful intellectual advancement. Critics cite "model gaps," impure motives (such as prioritizing AI safety career placement and deference to organizations like MIRI over pure rationality development), and institutional incentives that may distort the effort.29 A perceived tension also exists between epistemic rationality (exemplified by the LessWrong principle "Politics is the Mind-Killer") and real-world decision-making contexts in which instructors may hold preconceptions about desirable outcomes.29

Broader Effective Altruism Funding Structure

CFAR's funding has been part of a larger effective altruism ecosystem characterized by centralized funding from major grantmakers like Coefficient Giving. According to a 2024 analysis, this structure created "strong incentives to align with funder worldviews to get money," turning what was originally a niche intellectual community into a career track with concentrated power among funders while providing "thin governance/guardrails."23 Bloomberg characterized the broader EA movement as featuring "a culture of free-flowing funding with minimal accountability focused on selective universities."23

Relationship to Rationality and EA Communities

CFAR occupies a central position in the rationality movement and has strong ties to the effective altruism community. The organization originated directly from the LessWrong community and broader rationality movement associated with Eliezer Yudkowsky and the blog Overcoming Bias.127

Community Infrastructure

CFAR has contributed to building social infrastructure within rationality and EA communities through workshops, alumni networks, and partnerships with organizations like Effective Altruism Global.30 The organization has served as a gathering point for "rationality geeks" to exchange ideas on cognitive improvement and has helped connect individuals interested in existential risk reduction, AI safety, and effective altruism.10

Community-led initiatives have emerged from CFAR's alumni network. The Applied Rationality Unconference series, organized by CFAR alumni, began with a first retreat in 2023, followed by a second in November 2024 that opened participation beyond CFAR alumni.31 Multiple additional events were organized throughout 2025, including the "Blackpool Applied Rationality Unconference," representing unofficial, participant-organized versions of CFAR's official workshops.31

Current Community Role

As of recent reports, CFAR appears to have reduced its role as a central organizing force within the rationality community, though "shards of the organisation still help with AIRCS workshops."32 According to LessWrong discussions, there is no longer "a concentration of force working towards a public accessible rationality curriculum."32 The organization's shift to part-time remote operations and multi-year hiatus from regular programming has reduced its direct community presence, though the resumption of workshops in 2025 may signal renewed engagement.67

Organizational Challenges and Failures

CFAR has publicly acknowledged several organizational failures and challenges throughout its history:

Early Scaling Difficulties

In 2012-2013, CFAR's early attempts at exponential growth of workshops failed due to small-organization logistics challenges.17 Expanding the instructor base risked what the organization termed "failure mode #1"—enthusiastic but inadequately trained instructors teaching poor-quality classes and potentially damaging rationality's reputation.17 Initial scaling efforts with 2011-era knowledge produced subpar lessons that required significant iteration and refinement.17

Workshop Design Issues

Early workshops (circa May 2012) were criticized as too short and intense, overwhelming participants' capacity to digest ideas.17 CFAR responded by developing informal Skype chat follow-ups, which were later formalized into a six-week official support program.17

Limited Domain-Specific Impact

As of 2012 and continuing forward, CFAR has faced challenges in scaling "heavy-duty rationality skills" into critical fields like medicine, education, government, or software development.17 With capacity for approximately 20 participants per month in workshop programs, the organization's direct reach has remained limited relative to the scale of impact in these sectors.17 Health care professionals have been notably absent from workshops despite the sector's potential need for improved decision-making frameworks.17

Effective Fundraising Project

CFAR has highlighted the "impressively failed" Effective Fundraising Project as an example of "wise failure": a two-person nonprofit startup, founded in 2013 to write grants for effective charities, that shut down gracefully after six months.3334 While CFAR presented this as a case study in productive failure, it also illustrates the challenges of expanding the organization's impact model beyond direct workshop delivery.

Staff Transition and Organizational Form

The organization's transition from approximately 12 full-time staff to eight very-part-time instructors and curriculum developers represents a significant scaling back of operations.6 While CFAR has theorized that part-time work may help avoid organizational pitfalls like burnout or "going off the rails," this shift also reflects challenges in maintaining a sustainable full-time organizational model.6

Current Status and Recent Developments

As of September 2025, CFAR operates primarily as a remote organization with eight part-time curriculum developers and instructors, plus part-time staff for accounting and venue management.6 This represents a significant reduction from the organization's earlier staffing levels of approximately 12 full-time employees.6

Workshop Resumption

After a hiatus from regular programming beginning in early 2020, CFAR conducted several experimental programs in 2025:712

  • June 2025: Experimental mini-workshop at Arbor Summer Camp, teaching rationality material alongside other content. This pilot project expanded from a single instructor to multiple instructors and marked the first "workshop-like" content CFAR had presented in several years.12
  • November 2025: First mainline workshop since 2022, representing the beginning of a pilot program testing new workshop formats.7

According to CFAR's updates, the June 2025 mini-workshop energized staff to pursue further programming and provided valuable learning experiences for curriculum refinement.12

Leadership and Governance

Anna Salamon continues to serve as CFAR's president as of the most recent updates, with Jesse Liptrap and Michael Blume on the board of directors.1 The organization maintains its 501(c)(3) nonprofit status and continues to operate with a focus on curriculum development and testing techniques through workshops and online sessions.6

Hiring and Operations

CFAR has described itself as "low-key" in its hiring approach, being selective about adding new part-time team members while focusing on iterative refinement of rationality techniques.6 The organization runs seminars, visitor programs, and tests techniques on volunteers as part of its ongoing curriculum development work.30

Key Uncertainties

Several important questions about CFAR's effectiveness, impact, and future remain uncertain or inadequately documented:

  1. Causal impact on AI safety: While CFAR has documented case studies of alumni who work in AI safety, the causal contribution of CFAR training to their work is difficult to isolate from other factors like selection effects and broader community influences.

  2. Effectiveness of specific techniques: The organization's reported outcomes rely primarily on self-reported participant surveys without independent replication or rigorous experimental designs with control groups. The extent to which specific rationality techniques produce lasting behavioral and cognitive changes remains empirically undervalidated.

  3. Scalability limitations: CFAR's challenges in expanding beyond workshop-based delivery and reaching critical professional domains (medicine, policy, education) raise questions about whether the organization's model can achieve broad societal impact.

  4. Organizational sustainability: The transition to part-time remote operations and multi-year hiatus from regular programming may reflect either a strategic refinement of approach or ongoing challenges in maintaining a sustainable organizational model.

  5. Community influence measurement: While CFAR clearly influenced the rationality and EA communities, the magnitude and longevity of this influence—particularly after reduced operations—is difficult to quantify.

  6. Future trajectory: Whether the 2025 workshop resumption represents a sustained return to programming or a limited pilot effort remains to be seen, as does the organization's long-term strategy for curriculum development and community engagement.

Sources

Footnotes

  1. Citation rc-d4ee 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

  2. Citation rc-08b4

  3. Citation rc-d2e7

  4. Citation rc-6d9c 2 3

  5. Citation rc-3775

  6. Citation rc-b201 2 3 4 5 6 7 8 9 10

  7. Citation rc-2291 2 3 4 5

  8. Citation rc-12a8

  9. Citation rc-f4f0 2 3 4

  10. YouTube - CFAR Overview 2 3

  11. Timeline of Center for Applied Rationality 2 3 4 5 6

  12. Citation rc-e3e0 2 3 4

  13. Citation rc-c76a 2 3

  14. Citation rc-8fa8 2 3 4 5

  15. Effective Altruism - EA Global 2018 CFAR Workshop

  16. Citation rc-277b

  17. Citation rc-0381 2 3 4 5 6 7 8 9

  18. Citation rc-4943

  19. Citation rc-0a57 2

  20. CFAR 2016 Case Studies

  21. Citation rc-759a

  22. Citation rc-4809 2

  23. Citation rc-69f9 2 3 4 5 6

  24. Citation rc-f6d7

  25. Citation rc-3ba5

  26. Citation rc-9e18

  27. Citation rc-b4d8 2

  28. Citation rc-e7bd 2 3 4 5 6 7

  29. Citation rc-5fa7 2

  30. Citation rc-03a0 2

  31. Citation rc-c61b 2

  32. Citation rc-88a5 2

  33. Citation rc-38be

  34. Citation rc-3b4b

References

This page provides access to the IRS Form 990 tax filings for the Center for Applied Rationality (CFAR), a nonprofit organization focused on teaching rationality skills to individuals working on important problems including AI safety. The 990 reports offer transparency into CFAR's financials, leadership, and organizational activities over time.

Claims (1)
As of December 2019 or 2020, CFAR reported approximately \$1.4 million in total liquidity, including \$650,000 in cash, \$575,000 in expected grants, and \$215,000 in accounts receivable. The organization's total assets have been reported at \$23,039,646, though this figure may include the venue property acquired in 2018.
Unsupported0%Feb 22, 2026
Center for Applied Rationality, based in Berkeley, CA, is a public charity with assets of $23,039,646.

The source does not contain any information about CFAR's liquidity, cash, expected grants, or accounts receivable.

2What is Going On With CFAR?LessWrong·niplav·2022

A LessWrong post examining the status and direction of the Center for Applied Rationality (CFAR), an organization focused on developing and teaching rationality skills. The post likely discusses CFAR's activities, funding, mission, and relationship to the broader AI safety and rationality communities.

★★★☆☆
Claims (1)
As of recent reports, CFAR appears to have reduced its role as a central organizing force within the rationality community, though "shards of the organisation still help with AIRCS workshops." According to LessWrong discussions, there is no longer "a concentration of force working towards a public accessible rationality curriculum." The organization's shift to part-time remote operations and multi-year hiatus from regular programming has reduced its direct community presence, though the resumption of workshops in 2025 may signal renewed engagement.
Accurate90%Feb 22, 2026
I think the conclusion I take from it is ~"There's a bunch of individual people who were involved with CFAR still doing interesting stuff, but there is no such public organisation anymore in a meaningful sense (although shards of the organisation still help with AIRCS workshops); so you have to follow these individual people to find out what they're up to. Also, there is no concentration of force working towards a public accessible rationality curriculum anymore."
3CFAR - An Impressive FailureCenter for Applied Rationality

CFAR highlights Effective Fundraising's failed grant-writing experiment as a model of rigorous experimental practice, praising their public pre-commitment to success criteria, advance contingency planning, and transparent resource tracking despite sending 25 applications and receiving zero grants. The piece argues that well-structured failures yield valuable information and deserve recognition alongside successes.

★★★☆☆
Claims (1)
In 2013, CFAR highlighted as an example of "wise failure" the "impressively failed" Effective Fundraising Project—a two-person nonprofit startup founded in 2013 to write grants for effective charities that shut down gracefully after six months. While CFAR presented this as a case study in productive failure, it illustrates challenges in expanding the organization's impact model beyond direct workshop delivery.
4Andrew Critch - CFARacritch.com

Andrew Critch describes his role in cofounding CFAR (2011-2012) with Anna Salamon, Julia Galef, and Michael Smith, a non-profit running workshops on rational decision-making grounded in cognitive science. The page outlines CFAR's mission to apply insights from psychology, behavioral economics, and Bayesian reasoning to improve individual and collective decision-making, with explicit connections to effective altruism and AI safety communities.

Claims (1)
The rationality techniques taught at CFAR workshops—such as Bayesian reasoning, debiasing methods, and systematic decision-making frameworks—are presented as relevant to challenges in AI safety work, including addressing scope insensitivity and improving collaborative research practices.
Minor issues85%Feb 22, 2026
At CFAR, we ask: Can we do more for the world by learning about cognitive biases like scope insensitivity that might thwart our attempts to make altruistic decisions? Can we get more use out of our gut instincts by learning what their strengths and weaknesses are? Can playing cooperative games with intuitive Bayesian reasoning improve our ability to assess arguments and reason collectively in groups?

The source mentions CFAR workshops teaching about cognitive biases like scope insensitivity, but does not explicitly list Bayesian reasoning, debiasing methods, and systematic decision-making frameworks as techniques taught at CFAR workshops. The source does not explicitly state that the techniques taught at CFAR workshops are relevant to challenges in AI safety work or improving collaborative research practices.

5. CFAR - Grant from Open Philanthropy Project (Center for Applied Rationality)

The Center for Applied Rationality (CFAR) received a $1,035,000 two-year grant from the Open Philanthropy Project to improve organizational infrastructure, fund scholarships for effective altruists attending workshops, and support SPARC and EuroSPARC programs. The grant reflects Open Philanthropy's interest in supporting rationality training as part of the broader AI safety and EA ecosystem.

★★★☆☆
Claims (1)
- $1,035,000 grant in 2016 (two years) for operational improvements and research

A critical SFist commentary on the Center for Applied Rationality (CFAR), arguing that its $3,900 seminars blend pseudoscience, self-help tropes, and quasi-religious assumptions under the guise of rationality training. The piece questions CFAR's scientific credibility and notes its connections to AI safety and existential risk communities in the Bay Area tech world.

Claims (1)
CFAR's primary educational delivery mechanism has been immersive multi-day workshops, typically running 4.5 days and costing $3,900 to $4,000. These workshops emphasize three core pillars: applying critical thinking to real-world problems, having students practice skills rather than merely learn concepts, and building lasting habits.
Minor issues85%Feb 22, 2026
I say this because CFAR's $3,900 4-day seminars and their attendees are the subject of a New York Times magazine article this week that will leave you slapping your forehead in front of an imagined group of conference-goers.

The article states the seminars are 4 days, not 4.5 days. The article only mentions the price of the seminars as $3,900, not a range of $3,900 to $4,000. The article does not mention the three core pillars of the workshops.

Anna Salamon reflects on why CFAR failed to develop a genuinely effective art of rationality, arguing the core barrier is that it is easier and more locally reinforcing to simulate rationality teaching ('guessing the student's password') than to rigorously develop and test real techniques. She connects this failure mode to broader patterns in self-help and human potential movements, and how following these easier gradients corrupts one's capacity for clear reasoning.

★★★☆☆
Claims (1)
Some rationality community members have criticized CFAR for limited progress in creating a "real art of rationality" that aids meaningful intellectual advancement, citing "model gaps," impure motives (such as prioritizing AI safety career placement and deference to organizations like MIRI over pure rationality development), and institutional incentives that may distort efforts. A perceived tension exists between epistemic rationality (exemplified by the LessWrong principle "Politics is the Mind-Killer") and real-world decision-making contexts where instructors may hold preconceptions about desirable outcomes.
Accurate90%Feb 22, 2026
I suspect that "impure motives" (motives aimed at some local goal, and not simply at "help this mind be free and rational") were also a major contributor to what kept us from getting farther at CFAR, and that this interacted with and exacerbated the "model gaps" I was listing in hypothesis 1.

Duncan Sabien of the Center for Applied Rationality (CFAR) presents a workshop on practical rationality techniques at EA Global 2018 San Francisco. He focuses on the core question "Do you know what you're doing and why?" and explores methods such as managing personal autopilot and mimicking useful skills to improve decision-making and goal achievement.

★★★☆☆
Claims (1)
Key techniques taught at CFAR workshops include:
Unsupported0%Feb 22, 2026
We put on four and a half day rationality workshops, and also some targeted programs for groups like AI researchers, mathematicians, so on and so forth.

Cause IQ profile of the Center for Applied Rationality (CFAR), a Berkeley-based 501(c)(3) nonprofit founded in 2011 that develops rationality training workshops and hosts AI safety-adjacent research infrastructure including LessWrong and the Alignment Forum. CFAR conducts literature reviews in psychology and cognitive science to develop practical decision-making tools, and has expanded into research facilities and retreats for AI safety researchers.

Claims (1)
CFAR has contributed to building social infrastructure within rationality and EA communities through workshops, alumni networks, and partnerships with organizations like Effective Altruism Global. The organization has served as a gathering point for "rationality geeks" to exchange ideas on cognitive improvement and has helped connect individuals interested in existential risk reduction, AI safety, and effective altruism.
Unsupported20%Feb 22, 2026
Main workshops & research - cfar has performed literature reviews in psychology, cognitive science, and related fields in order to develop a range of mental techniques designed to help improve clarity of thinking and decision-making, and increase internal alignment towards goals.

The source does not mention CFAR's contribution to building social infrastructure within rationality and EA communities through workshops, alumni networks, and partnerships with organizations like Effective Altruism Global. The source does not mention CFAR serving as a gathering point for 'rationality geeks' to exchange ideas on cognitive improvement. The source does not mention CFAR helping connect individuals interested in existential risk reduction, AI safety, and effective altruism.

10. CFAR - Mistakes Regarding Brent (Center for Applied Rationality)

CFAR (Center for Applied Rationality) publicly details the institutional failures that allowed a community member ('Brent') who was later credibly accused of physical, sexual, and emotional abuse to participate in and assist at multiple CFAR events. The document outlines specific errors in judgment, failure to investigate warning signs, and the lack of formal safety policies, serving as an accountability statement and commitment to future improvement.

★★★☆☆
Claims (1)
In January 2018, CFAR's semi-independent investigation panel, ACDC, received allegations of physical, sexual, and emotional abuse from one of Brent's former partners. Despite these allegations, ACDC recommended against banning the individual in April 2018, and CFAR leadership followed this recommendation.
11. LessWrong - CFAR A Year Later (LessWrong · Swimmer963 (Miranda Dixon-Luinenburg) · 2013 · Blog post)

A CFAR workshop alumna reflects one year after attending one of the organization's earliest workshops, offering an honest assessment of how much the training actually changed her behavior and thinking. She finds that while goal achievement was modest (4 of 13 fully completed), the conceptual frameworks gained had lasting value, particularly in professional contexts like nursing. The post highlights the fundamental difficulty of translating short-term intensive training into lasting behavioral change.

★★★☆☆
Claims (1)
- Alumni support: Six-week official follow-up programs and ongoing coaching/mentorship for workshop graduates.
Accurate100%Feb 22, 2026
CFAR now does official followups with participants for six weeks following the workshop.
12. CFAR - About Mission (Center for Applied Rationality)

CFAR's mission page articulates the organization's rationale for improving human reasoning: as human capabilities and tools grow more powerful, the stakes of cognitive errors rise correspondingly. CFAR aims to create a collaborative environment where researchers and practitioners can study, share, and test methods for improving thinking and decision-making to help humanity navigate increasingly high-stakes challenges.

★★★☆☆
Claims (2)
CFAR's stated mission is "Developing clear thinking for the sake of humanity's future," with particular emphasis on addressing challenges related to existential risks, including AI safety. The organization has trained over 1,300 workshop alumni through approximately 60 flagship workshops held between 2012 and early 2020, with participants reporting an average satisfaction rating of 9.2 out of 10. Beyond its core workshop programs, CFAR has contributed to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.
The organization has evolved significantly since its founding, transitioning from a full-time staff of approximately 12 to a mostly-remote operation with eight part-time curriculum developers and instructors as of September 2025. After a multi-year hiatus from regular programming, CFAR resumed mainline workshops in November 2025, testing new formats while continuing to refine its approach to teaching applied rationality.

A chronological timeline documenting the history and key events of the Center for Applied Rationality (CFAR), an organization that runs workshops and develops techniques for improving human rationality and decision-making. CFAR has been closely connected to the AI safety community, particularly through its influence on researchers and its historical ties to MIRI and EA communities.

Claims (1)
CFAR officially began offering classes in 2012, developing a flagship workshop model that charged $3,900 for multi-day intensive programs. The organization held monthly workshops training diverse participants including scientists, police officers, teachers, entrepreneurs, and high school students. Workshop activity varied by year:

Ben Kuhn's 2013 review of a Center for Applied Rationality (CFAR) workshop, evaluating whether their rationality-training techniques produce genuine self-improvement or fall into 'derpy self-improvement' patterns. He critically examines multiple hypotheses about why testimonials are positive and reports modest but concrete personal improvements.

Claims (1)
According to CFAR, these techniques aim to bridge System 1 and System 2 cognition, helping participants work with emotions, reframe problems, and understand the modular nature of the mind.
Unsupported30%Feb 22, 2026
CFAR's workshops aim to give people more understanding and control of their own decision-making.

The source does not mention that the techniques aim to bridge System 1 and System 2 cognition, help participants work with emotions, reframe problems, and understand the modular nature of the mind.

A critical investigative essay examining the psychological and social harms within the rationalist community centered around MIRI and CFAR, documenting cases of psychosis, suicide, and cult-like dynamics. The piece explores how the community's extreme commitment to rationality and AI existential risk created environments harmful to mental health. It raises broader questions about whether the cultural pathologies of the rationalist community undermine its credibility on AI safety.

Claims (1)
Between March and September 2022, $3,405,000 was transferred from the FTX Foundation to CFAR, with an additional $1,500,000 transferred on October 3, 2022, in ten separate transactions. Following FTX's collapse, FTX debtors required CFAR to return approximately $5 million.
Accurate100%Feb 22, 2026
CFAR, for example, was required by FTX debtors to return the approximately $5 million it received before the collapse of November 2022. $3,405,000 was transferred from FTX Foundation to CFAR between March and September 2022, and an additional $1,500,000 was transferred on October 3, 2022, in ten separate transactions.

This EA Forum topic page aggregates information and community posts about the Center for Applied Rationality (CFAR), a Berkeley nonprofit founded in 2012 that teaches epistemic and instrumental rationality skills. CFAR has been substantially funded by Open Philanthropy and other EA-aligned funders, and has historically played a role in developing the rationalist and EA talent pipeline relevant to AI safety work.

★★★☆☆
Claims (1)
CFAR was founded in 2012 by Anna Salamon, Julia Galef, Michael Smith (also known as Valentine Smith), and Andrew Critch. The organization's origins trace to Anna Salamon's work at the Machine Intelligence Research Institute (MIRI) in 2011, where she began experimenting with teaching rationality techniques. CFAR initially functioned as an extension of MIRI before becoming an independent nonprofit organization.
Minor issues85%Feb 22, 2026
CFAR was founded in 2012 by Anna Salamon, Julia Galef, Valentine Smith and Andrew Critch.

The source does not mention that CFAR initially functioned as an extension of MIRI before becoming an independent nonprofit organization. The source does not mention Anna Salamon's work at the Machine Intelligence Research Institute (MIRI) in 2011, where she began experimenting with teaching rationality techniques. The source refers to Michael Smith as Valentine Smith.

An overview presentation of the Center for Applied Rationality (CFAR), an organization focused on teaching rationality skills and cognitive tools to help people think more clearly and make better decisions. CFAR is closely connected to the AI safety community, as improving human reasoning is seen as relevant to addressing existential risks including those from advanced AI.

★★☆☆☆
Claims (1)
CFAR officially began offering classes in 2012, developing a flagship workshop model that charged $3,900 for multi-day intensive programs. The organization held monthly workshops training diverse participants including scientists, police officers, teachers, entrepreneurs, and high school students. Workshop activity varied by year:
18. Center for Applied Rationality (CFAR) Press Kit (Center for Applied Rationality)

A press kit from the Center for Applied Rationality (CFAR), an organization focused on teaching rationality and cognitive skills to help people think more clearly and make better decisions. CFAR has historically been closely connected to the AI safety community, training many researchers and practitioners who work on existential risk and alignment. The kit provides background on CFAR's mission, programs, and organizational identity.

★★★☆☆
Claims (1)
The organization emerged from the rationality movement around Eliezer Yudkowsky's LessWrong community and focuses on applying insights from cognitive science, behavioral economics, psychology, and decision theory to real-world decision-making and problem-solving.
19. CFAR Financial Overview (Center for Applied Rationality)

A financial transparency report from the Center for Applied Rationality (CFAR) for 2019, detailing the organization's revenue, expenses, and financial health. CFAR is a nonprofit focused on developing and teaching rationality skills, with indirect relevance to AI safety through its work training researchers and community members.

★★★☆☆
Claims (1)
As of December 2019 or 2020, CFAR reported approximately $1.4 million in total liquidity, including $650,000 in cash, $575,000 in expected grants, and $215,000 in accounts receivable. The organization's total assets have been reported at $23,039,646, though this figure may include the venue property acquired in 2018.
20. CFAR Official Website (Center for Applied Rationality)

The Center for Applied Rationality (CFAR) is a nonprofit organization focused on developing and teaching practical rationality skills to help people think more clearly and make better decisions. CFAR runs workshops and training programs aimed at improving cognitive tools and reasoning abilities. The organization has historically had significant overlap with the AI safety community, training many researchers and advocates.

★★★☆☆
Claims (1)
CFAR's stated mission is "Developing clear thinking for the sake of humanity's future," with particular emphasis on addressing challenges related to existential risks, including AI safety. The organization has trained over 1,300 workshop alumni through approximately 60 flagship workshops held between 2012 and early 2020, with participants reporting an average satisfaction rating of 9.2 out of 10. Beyond its core workshop programs, CFAR has contributed to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.
21. CFAR's new focus, and AI Safety (LessWrong · AnnaSalamon · 2016)

CFAR announced a strategic pivot to focus on AI safety and existential risk reduction, arguing that progress is bottlenecked by collective epistemology rather than awareness. The organization aims to improve individual reasoning and collaborative thinking among AI safety researchers, effective altruists, and rationalists, believing this offers the highest leverage for improving humanity's survival odds.

★★★☆☆
Claims (1)
The rationality techniques taught at CFAR workshops—such as Bayesian reasoning, debiasing methods, and systematic decision-making frameworks—are presented as relevant to challenges in AI safety work, including addressing scope insensitivity and improving collaborative research practices.
Unsupported20%Feb 22, 2026
Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.

The source mentions CFAR's focus on AI safety and improving thinking skills, but it does not explicitly list the specific rationality techniques taught at CFAR workshops (Bayesian reasoning, debiasing methods, systematic decision-making frameworks) or their relevance to specific challenges in AI safety work (addressing scope insensitivity, improving collaborative research practices).

Wikipedia overview of the Center for Applied Rationality (CFAR), a nonprofit organization focused on developing and teaching rationality techniques to help people think more clearly and make better decisions. CFAR has been particularly influential in the AI safety community by training researchers and advocates in cognitive tools and epistemic practices. The organization has historically served as a pipeline connecting rationalist community members to AI safety work.

★★★☆☆
Claims (1)
The organization emerged from the rationality movement around Eliezer Yudkowsky's LessWrong community and focuses on applying insights from cognitive science, behavioral economics, psychology, and decision theory to real-world decision-making and problem-solving.
23. CFAR Updates Archive (Center for Applied Rationality)

An archive of updates and newsletters from the Center for Applied Rationality (CFAR), documenting the organization's activities, research, and developments in rationality training over time. CFAR focuses on teaching cognitive tools and decision-making skills relevant to addressing important problems, including AI safety. This archive provides a historical record of CFAR's evolving work and community engagement.

★★★☆☆
Claims (1)
In 2013, CFAR highlighted the "impressively failed" Effective Fundraising Project as an example of "wise failure": a two-person nonprofit startup, founded that year to write grants for effective charities, that shut down gracefully after six months. While CFAR presented this as a case study in productive failure, it also illustrates the difficulty of extending the organization's impact model beyond direct workshop delivery.

A VICE media article covering the Center for Applied Rationality (CFAR), an organization focused on teaching rationality and critical thinking skills, often associated with the AI safety and effective altruism communities. The article likely examines CFAR's methods, culture, or influence on the broader rationalist and AI safety ecosystem.

Claims (3)
CFAR's stated mission is "Developing clear thinking for the sake of humanity's future," with particular emphasis on addressing challenges related to existential risks, including AI safety. The organization has trained over 1,300 workshop alumni through approximately 60 flagship workshops held between 2012 and early 2020, with participants reporting an average satisfaction rating of 9.2 out of 10. Beyond its core workshop programs, CFAR has contributed to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.
Inaccurate50%Feb 22, 2026
CFAR's founders, Anna Salamon, Julia Galef, Michael Smith, and Andrew Critch all have impressive backgrounds in math, artificial intelligence, science, or a combination. In 2011, Salamon, CFAR's earliest founder, was working at the Machine Intelligence Research Institute (MIRI) an artificial research firm that now shares its offices with CFAR in Berkeley. CFAR originally began as an extension of MIRI, she explained in an email. "I was doing training and onboarding for the Machine Intelligence Research Institute, which in practice required a lot of rationality training. And I began to feel that developing exercises for training 'rationality'—the ability to form accurate beliefs in confusing contexts, and to achieve one's goals—was incredibly important, and worth developing in its own right," Salamon wrote.

unsupported: The source does not mention CFAR's stated mission. unsupported: The source does not mention the number of workshop alumni trained by CFAR. unsupported: The source does not mention the number of flagship workshops held by CFAR between 2012 and early 2020. unsupported: The source does not mention the average satisfaction rating of participants. unsupported: The source does not mention CFAR's contributions to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.

The organization's 2017 Impact Report indicated that alumni reported higher impact through better motivation navigation and clearer thinking. However, these results are primarily based on internal surveys without independent replication or external validation.
Unsupported0%Feb 22, 2026
So far, a fair amount of the participants who have experienced CFAR's teachings firsthand see positive results in their lives, even a year later, at least according to survey data that CFAR has internally collected.

The source does not mention a 2017 Impact Report or any specific findings related to alumni motivation, navigation, or clearer thinking. The source does not explicitly state that the results are based on internal surveys without independent replication or external validation.

CFAR was founded in 2012 by Anna Salamon, Julia Galef, Michael Smith (also known as Valentine Smith), and Andrew Critch. The organization's origins trace to Anna Salamon's work at the Machine Intelligence Research Institute (MIRI) in 2011, where she began experimenting with teaching rationality techniques. CFAR initially functioned as an extension of MIRI before becoming an independent nonprofit organization.
Accurate100%Feb 22, 2026
CFAR's founders, Anna Salamon, Julia Galef, Michael Smith, and Andrew Critch all have impressive backgrounds in math, artificial intelligence, science, or a combination. In 2011, Salamon, CFAR's earliest founder, was working at the Machine Intelligence Research Institute (MIRI) an artificial research firm that now shares its offices with CFAR in Berkeley. CFAR originally began as an extension of MIRI, she explained in an email.
25. CFAR - June 2025 Experimental Miniworkshop (Center for Applied Rationality)

An announcement or update from the Center for Applied Rationality (CFAR) regarding an experimental miniworkshop scheduled for June 2025. CFAR workshops focus on applied rationality techniques, cognitive tools, and decision-making skills relevant to individuals working on high-stakes problems including AI safety.

★★★☆☆
Claims (1)
After a hiatus from regular programming beginning in early 2020, CFAR conducted an experimental mini-workshop in June 2025 at Arbor Summer Camp and resumed mainline workshops in November 2025, marking the beginning of a pilot program testing new workshop formats.

Announcement for a 4-day applied rationality retreat at CEEALAR in Blackpool, UK in April 2025, featuring CFAR-style workshops and participant-organized activities for 15 attendees at £120 including accommodation and meals. The event builds on previous successful iterations and is aimed at individuals seeking to apply rationality techniques to personal challenges.

★★★☆☆
Claims (1)
The Applied Rationality Unconference series, organized by CFAR alumni, began with a first retreat in 2023, followed by a second in November 2024 that opened participation beyond CFAR alumni. Multiple additional events were organized throughout 2025, including the "Blackpool Applied Rationality Unconference," representing unofficial, participant-organized versions of CFAR's official workshops.
Accurate100%Feb 22, 2026
The Applied Rationality Unconference started in 2023 as a small unconference-style weekend retreat for CFAR (Center for Applied Rationality) alumni. We had fun, talked a lot about CFAR-style applied rationality and helped each other make genuine progress on some of the most important bottlenecks in our lives. 100% of attendees rated the experience 8/10 or higher, and it was one of my favourite weekends of that year. In November 2024 we ran a second retreat, this time opening it up to non-CFAR alumni. Attendees were excited enough that they decided to organise several more of these events throughout 2025!
27. CFAR 2016 Case Studies (Center for Applied Rationality)

A collection of case studies from the Center for Applied Rationality (CFAR) documenting how participants applied rationality training techniques to real-world problems in 2016. The studies illustrate practical outcomes of CFAR's workshop curriculum, showing how improved reasoning and decision-making skills affect participants' personal and professional lives. This resource serves as qualitative evidence for the effectiveness of rationality training in the AI safety and effective altruism communities.

★★★☆☆
Claims (2)
CFAR develops and refines rationality techniques by synthesizing insights from cognitive science, psychology, neuroscience, behavioral economics, mathematics, statistics, and game theory. The organization conducts empirical testing of techniques and invents new methods when academic research proves insufficient for practical application.
Notable examples include:
28. CFAR Resources Updates (Center for Applied Rationality)

This page tracks updates to resources provided by the Center for Applied Rationality (CFAR), an organization focused on developing and teaching practical rationality skills. CFAR's work emphasizes improving human reasoning and decision-making, which has relevance to the AI safety community through better epistemic practices and community training.

★★★☆☆
Claims (1)
The organization has evolved significantly since its founding, transitioning from a full-time staff of approximately 12 to a mostly-remote operation with eight part-time curriculum developers and instructors as of September 2025. After a multi-year hiatus from regular programming, CFAR resumed mainline workshops in November 2025, testing new formats while continuing to refine its approach to teaching applied rationality.
29. CFAR 2017 Impact Report (Center for Applied Rationality)

This is the Center for Applied Rationality's 2017 annual impact report, documenting CFAR's activities, outcomes, and mission progress over the year. CFAR focuses on developing rationality training to help individuals—particularly those working on existential risk and AI safety—make better decisions and reason more effectively. The report likely covers workshop attendance, curriculum updates, and evidence of participant impact in high-stakes domains.

★★★☆☆
Claims (1)
The organization's 2017 Impact Report indicated that alumni reported higher impact through better motivation navigation and clearer thinking. However, these results are primarily based on internal surveys without independent replication or external validation.
30. CFAR - Fundraising and Leadership Updates (Center for Applied Rationality)

An organizational update from the Center for Applied Rationality (CFAR) covering fundraising progress and leadership changes in 2018. It communicates CFAR's institutional status and strategic direction to donors and community members. This type of update reflects CFAR's role in the broader AI safety and rationality ecosystem.

★★★☆☆
Claims (1)
- Multi-year institutional grant renewed in 2018 for 2018-2019
Citation verification: 6 verified, 1 flagged, 18 unchecked of 34 total

Structured Data

7 facts · 7 records
Founded Date: 2012

Key People (5)

- Timothy Telleen-Lawton · Former Executive Director · 2018–present
- Andrew Critch (Founder) · Former Curriculum Developer & Co-founder · 2012–present
- Michael Smith (Founder) · Senior Instructor & Co-founder · 2012–present
- Anna Salamon (Founder) · President & Chair of Board · 2012–present
- Julia Galef (Founder) · President & Co-founder · 2012–2016

All Facts (7)

Organization
- Legal Structure: 501(c)(3) organization
- Headquarters: Berkeley
- Country: United States
- Founded Date: 2012

People
- Founder (text): Julia Galef

Biographical
- Wikipedia: https://en.wikipedia.org/wiki/Center_for_Applied_Rationality

General
- Website: http://rationality.org/

Board Seats (2)

- Zvi Mowshowitz · Board Director
- iaU7x4cgzS · Curriculum Developer & Instructor; Board Director

Related Wiki Pages

Top Related Pages

Analysis

Timelines Wiki · AI Watch

Other

Nuño Sempere · Jaan Tallinn · Gwern Branwen · Vidur Kapur · Andrew Critch · Nate Soares

Organizations

Survival and Flourishing Fund · Coefficient Giving · Machine Intelligence Research Institute · Lighthaven (Event Venue) · Center for Human-Compatible AI · William and Flora Hewlett Foundation

Concepts

Community Building Overview · Diagram Naming Research · Ea Longtermist Wins Losses