CFAR - About Mission
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Center for Applied Rationality
CFAR is a nonprofit closely connected to the AI safety and rationalist communities; its mission emphasizes improving human reasoning as a prerequisite for navigating high-stakes technological risks, making it contextually relevant to AI safety culture and researcher development.
Metadata
Summary
CFAR's mission page articulates the organization's rationale for improving human reasoning: as human capabilities and tools grow more powerful, the stakes of cognitive errors rise correspondingly. CFAR aims to create a collaborative environment where researchers and practitioners can study, share, and test methods for improving thinking and decision-making to help humanity navigate increasingly high-stakes challenges.
Key Points
- Human intelligence drives both civilization's greatest achievements and its most dangerous risks, making cognitive improvement increasingly high-stakes.
- Known cognitive biases, motivated reasoning, blindspots, and failures of follow-through persist even when people are aware of them.
- CFAR functions as a community and research venue for 'rationality geeks' to exchange and empirically test methods for improving reasoning.
- The organization runs workshops and curriculum development programs, founded in 2012 by Anna Salamon, Julia Galef, Valentine Smith, and Andrew Critch.
- Improving human reasoning is framed as directly relevant to existential risks including bioweapons, AI, and climate change.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Center for Applied Rationality | Organization | 62.0 |
Cached Content Preview
Nearly every major “victory” that humanity has experienced over the past ten thousand years—from agriculture to the eradication of smallpox—has come from human intelligence.
Nearly every major problem that humanity will face over the next century—from bioengineered weapons to AI to climate change—will _also_ be the result of human intelligence.
As our species builds up a greater foundation of knowledge and develops ever-more-powerful tools, the stakes are only going up. Compare the best and the worst that a single human could accomplish with the resources available in the year 1018 to the best and the worst that a single human could accomplish in 1918, or to the best and the worst that could be accomplished in 2008. Over the coming decades, our ability to make the world a better place is going to rise meteorically—along with our ability to make disastrous mistakes.
And yet, human intelligence itself remains demonstrably imperfect and largely mysterious. We suffer from biases that still influence us even after we know they’re there. We make mistakes that we’ve made a dozen times before. We jump to conclusions, make overconfident predictions, develop giant blindspots around ego and identity and social pressure, fail to follow through on our goals, turn opportunities for collaboration into antagonistic zero-sum games—and those are just the mistakes we _notice_.
Sometimes we manage to catch these mistakes before they happen—how? Some people manage to reliably avoid some of these failure modes—how? Where does good thinking come from? Good research? Good debate? Innovation? Attention to detail? Motivation? How does one draw the appropriate balance between skepticism and credulity, or deliberation and execution, or self-discipline and self-sympathy? How does one balance happiness against productivity, or the exploitation of known good strategies against the need to explore and find the next big breakthrough? What are the blindspots that cause humans—even unusually moral and capable ones—to overlook giant, glaring problems, or to respond inappropriately or ineffectively to those problems, once recognized?
CFAR aims to create an arena where rationality geeks can come trade notes about how our minds work, and about ways we’re intentionally messing with how our minds work in a manner that seems to be making things better. We think minds, like most things, can sometimes be made better by thinking about it, trading ideas, playing around, etc, in a context of fun, freedom, and accurate observation of what effects seem to be following from what experiments.
## Nuts & Bolts
CFAR is a 501(c)(3) non-profit operating mostly remotely, originally founded in 2012 by Anna Salamon, Julia Galef, Valentine Smith, and Andrew Critch.
Our workshop programs are currently run by a very lean staff including around eight part time curriculum developers. We are low-key looking for additional very-part-time rationality developers, but are being extremely picky. Most of our
... (truncated, 3 KB total)