Longterm Wiki

Author

AnnaSalamon

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

This 2016 announcement marked CFAR's strategic pivot toward AI safety, reflecting the broader rationalist community's growing concern about existential risk and the belief that better collective reasoning is a key bottleneck for the field.

Forum Post Details

Karma
51
Comments
88
Forum
lesswrong
Forum Tags
Center for Applied Rationality (CFAR) · Project Announcement

Metadata

Importance: 45/100 · blog post · primary source

Summary

CFAR announced a strategic pivot to focus on AI safety and existential risk reduction, arguing that progress is bottlenecked by collective epistemology rather than awareness. The organization aims to improve individual reasoning and collaborative thinking among AI safety researchers, effective altruists, and rationalists, believing this offers the highest leverage for improving humanity's survival odds.

Key Points

  • CFAR reoriented its mission toward AI safety and existential risk, viewing rationality training as a high-leverage intervention for the field.
  • Progress on existential risk is seen as bottlenecked by collective epistemology—how well people reason together—rather than by lack of awareness or advocacy.
  • CFAR aims to serve AI safety researchers, effective altruists, and rationality-focused individuals by improving their individual and collaborative thinking skills.
  • The organization explicitly chose not to signal-boost existing AI safety models, instead focusing on helping people reason more rigorously about the problems.
  • This represents an indirect approach to AI safety: improving the cognitive infrastructure of the people working on it rather than directly producing technical research.

Cited by 1 page

Page | Type | Quality
Center for Applied Rationality | Organization | 62.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 52 KB

 CFAR’s new focus, and AI Safety 

by AnnaSalamon · 3rd Dec 2016 · 4 min read · 88 comments · 51 karma

 A bit about our last few months:

 
We’ve been working on getting a simple, clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).

 As part of that, we’ll need to find a way to be intelligible.

This is the first of several blog posts aimed at making our new form visible from outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) (We'll be talking more about the causes of this mission-change, the extent to which it is in fact a change, etc., in an upcoming post.)

Here's a short explanation of our new mission:
 We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.

 
Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.

 
Our aim is therefore to find ways of improving both individual thinking skill and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.

To elaborate a little:
 Existential wins and AI safety

 By an “existential win”, we mean humanity creates a stable, positive future.  We care a heck of a lot about this one. 
 
 
Our working model here accords roughly with the model in Nick Bostrom’s book Superintelligence. In particular, we believe that if general artificial intelligence is at some point invented, it will be an enormously big deal.
 
 
(Lately, AI Safety is being discussed by everyone from The Economist to Newsweek to Obama to an open letter from eight thousand. But we’ve been thinking on this, and backchaining partly from it, since before that.)
 
 
 
 Who we’re focusing on, why

Our preliminary investigations agree with The Onion’s; despite some looking, we have found no ultra-competent group of people behind the scenes who have fully got things covered.
 
 
 What we have found are: 
 
 
 AI and machine learning graduate students, researchers, project-managers, etc. who care; who can think; and who are interested in thinking better;

Students and others affiliated with the “Effective Altruism” movement, who are looking to direct their careers in ways that can do the most good;

 Rationality geeks, 

... (truncated, 52 KB total)
Resource ID: bb93f09b90d6582c | Stable ID: NGVjNTUxMz