Factor Diagram Naming: Research Report
Executive Summary
| Finding | Key Insight | Recommendation |
|---|---|---|
| "Cause-effect" is misleading | Our diagrams aren't strictly causal—they include uncertainties, considerations, and influence relationships | Rename to something more accurate |
| Multiple established frameworks exist | Influence diagrams, driver diagrams, argument maps, crux maps all map similar territory | Borrow terminology strategically |
| Uncertainty types matter | Aleatoric (inherent randomness) vs epistemic (knowledge gaps) distinction is well-established | Consider incorporating uncertainty typology |
| "Crux" has traction in target audience | Rationalist/EA community already uses "crux" terminology from CFAROrganizationCenter for Applied RationalityBerkeley nonprofit founded 2012 teaching applied rationality through workshops ($3,900 for 4.5 days), trained 1,300+ alumni reporting 9.2/10 satisfaction and 0.17σ life satisfaction increase at 1-y...Quality: 62/100 | Strong candidate for naming |
| Recommended name | "Crux Map" or "Factor Map" | Balances clarity, accuracy, and audience fit |
Background
Our current "cause-effect diagrams" visualize factors that influence outcomes, including uncertainties, considerations, and decision-relevant relationships. The name "cause-effect" implies strict causality that doesn't match what we're actually modeling.
This report surveys similar conceptual frameworks across multiple fields to identify appropriate terminology for LongtermWiki's factor visualization system. We examined:
- Decision analysis: Influence diagrams, decision trees, tornado diagrams
- Systems thinking: Causal loop diagrams, stock-flow models
- Philosophy/Argumentation: Argument maps, IBIS, crucial considerations
- Rationalist community: Crux mapping, double crux
- Quality improvement: Driver diagrams, theory of change
- Knowledge representation: Concept maps, knowledge graphs
Framework Comparison
1. Influence Diagrams (Decision Analysis)
Origin: Introduced by Howard and Matheson in 1981, building on Howard Raiffa's decision analysis work.
Definition: A graph representing a Bayesian decision problem with three node types:
- Circles: Random variables (uncertainties)
- Rectangles: Decision variables (choices)
- Diamonds: Utility/value variables (outcomes)
| Strength | Weakness |
|---|---|
| Distinguishes uncertainty from decisions | Technical/academic connotation |
| Well-established in decision theory | Implies quantitative Bayesian framework |
| Supports automatic model generation | Less familiar outside specialized fields |
Influence diagrams share our goal of mapping what affects outcomes, but they're specifically designed for quantitative decision analysis with probability distributions. Our diagrams are more qualitative and conceptual.
Sources: Decision Analysis Wikipedia, INFORMS Decision Analysis
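The three-node ontology above can be sketched as a discriminated union. This is an illustrative model only; the type and field names are assumptions, not LongtermWiki's actual schema or any influence-diagram library's API.

```typescript
// Sketch of influence-diagram node types (illustrative; names are assumptions).
type NodeKind = "uncertainty" | "decision" | "value";

interface DiagramNode {
  id: string;
  label: string;
  kind: NodeKind; // circle = uncertainty, rectangle = decision, diamond = value
}

interface DiagramEdge {
  from: string; // influencing node
  to: string;   // influenced node
}

const nodes: DiagramNode[] = [
  { id: "policy", label: "Policy passes?", kind: "uncertainty" },
  { id: "fund", label: "Fund intervention", kind: "decision" },
  { id: "impact", label: "Expected impact", kind: "value" },
];

const edges: DiagramEdge[] = [
  { from: "policy", to: "impact" },
  { from: "fund", to: "impact" },
];
```

Making the node kind explicit in the data model is what lets influence diagrams distinguish "things we don't know" from "things we choose", which is exactly the distinction our current "cause-effect" name blurs.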
2. Causal Loop Diagrams (Systems Thinking)
Origin: Emerged from System Dynamics practice, popularized by Jay Forrester and the MIT school.
Definition: A diagram showing causal relationships between variables, with emphasis on feedback loops:
- Reinforcing loops (R): Change compounds (e.g., compound interest)
- Balancing loops (B): Change counteracts (e.g., thermostat)
- Polarity (+/-): Whether variables move together or opposite
| Strength | Weakness |
|---|---|
| Handles feedback loops natively | Our diagrams are DAGs (no cycles) |
| Clear visual conventions | "Causal" in the name has same problem |
| Popular in organizational contexts | Emphasizes dynamics over static factors |
CLDs are designed for systems with feedback loops. Our current implementation uses Dagre (DAG layout), which doesn't support cycles. If we wanted to support feedback, we'd need different rendering.
Sources: Causal Loop Diagram Wikipedia, Systems Thinker, Cascade Institute Handbook
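Because Dagre assumes an acyclic graph, feedback edges would need to be caught before layout. A minimal DFS cycle check, sketched in TypeScript (the `Edge` shape is an assumption for illustration, not Dagre's API):

```typescript
// Minimal cycle check to run before handing a graph to a DAG layout engine.
type Edge = { from: string; to: string };

function hasCycle(nodeIds: string[], edges: Edge[]): boolean {
  // Build adjacency list.
  const adj = new Map<string, string[]>(nodeIds.map((n) => [n, []]));
  for (const e of edges) adj.get(e.from)?.push(e.to);

  // DFS with three implicit states: unvisited, visiting, done.
  const state = new Map<string, "visiting" | "done">();
  const visit = (n: string): boolean => {
    if (state.get(n) === "visiting") return true; // back edge => cycle
    if (state.get(n) === "done") return false;
    state.set(n, "visiting");
    for (const m of adj.get(n) ?? []) {
      if (visit(m)) return true;
    }
    state.set(n, "done");
    return false;
  };
  return nodeIds.some((n) => visit(n));
}
```

A reinforcing loop in CLD terms (A → B → A) is precisely what this rejects, which is why supporting CLD-style feedback would require more than a renderer swap.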
3. Argument Maps & IBIS (Argumentation)
Origin: IBIS (Issue-Based Information System) invented by Werner Kunz and Horst Rittel in the 1960s for tackling "wicked problems."
Core Elements:
- Issues: Questions to be addressed
- Positions: Proposed answers/solutions
- Arguments: Pro/con rationales supporting or opposing positions
| Strength | Weakness |
|---|---|
| Designed for complex, ill-defined problems | Focused on argumentation, not factors |
| Captures disagreement structure | Three-type ontology is restrictive |
| Software tools exist (Compendium) | Less familiar than other terms |
IBIS's focus on structuring discourse around "wicked problems" resonates with LongtermWiki's goal. The emphasis on capturing positions and arguments for/against is similar to how we model different views on factors.
Tools: Compendium, DRed, designVUE
Sources: IBIS Wikipedia, Kunz & Rittel 1970, Cognexus
4. Crucial Considerations (Philosophy)
Origin: Concept introduced by Nick Bostrom in 2007, elaborated in a 2014 talk.
Definition: "A consideration such that if it were taken into account, it would overturn the conclusions we would otherwise reach about how we should direct our efforts."
Key concepts:
- Crucial consideration: Overturns conclusions
- Crucial consideration component: Becomes crucial when combined with other considerations
- Deliberation ladder: Sequence of successive reassessments
| Strength | Weakness |
|---|---|
| Directly relevant to LongtermWiki's mission | "Consideration" is abstract |
| Familiar to EA/rationalist audience | More about ideas than structure |
| Captures the "changes your mind" aspect | Not specifically a diagram type |
"If we have overlooked even just one such consideration, then all our best efforts might be for naught – or less. When headed the wrong way, the last thing needed is progress." — Nick BostromPersonNick BostromComprehensive biographical profile of Nick Bostrom covering his founding of FHI, the landmark 2014 book 'Superintelligence' that popularized AI existential risk, and key philosophical contributions...Quality: 25/100
Sources: Bostrom's website, EA Forum, Effective Altruism article
5. Crux Mapping (Rationalist Community)
Origin: Developed by CFAR (Center for Applied Rationality) as part of their curriculum.
Definitions:
- Crux: A belief C that, if changed, would change your belief in B
- Double crux: A shared crux where two disagreeing people would both change their minds
Victory condition: "Agreement on a shared causal model of the world—you've won when both you and your partner agree to the same if-then statements."
| Strength | Weakness |
|---|---|
| Familiar to target audience | Jargon outside rationalist circles |
| Emphasizes "what would change your mind" | Originally about dyadic disagreement |
| "Crux" is memorable and specific | Some critique it as overly idealized |
The crux concept directly aligns with LongtermWiki's goal of mapping key uncertainties that drive prioritization. "What would change your mind about this intervention?" is exactly what we're trying to capture.
Sources: LessWrong Double Crux, CFAR Resources, Basic Double Crux Pattern
6. Driver Diagrams (Quality Improvement)
Origin: Widely used in healthcare quality improvement, particularly with Plan-Do-Study-Act (PDSA) cycles.
Structure:
- Aim: What you're trying to achieve
- Primary drivers: High-level factors that must be influenced
- Secondary drivers: Specific factors/interventions
- Change ideas: Concrete actions
| Strength | Weakness |
|---|---|
| Clear hierarchical structure | Associated with healthcare/QI |
| "Driver" is intuitive | Implies more agency than we model |
| Links factors to interventions | May feel too operational |
Sources: IHI Driver Diagram, AHRQ Key Driver Diagram
7. Theory of Change / Logic Models (Nonprofit)
Origin: Common in nonprofit and philanthropic contexts.
Key distinction:
- Theory of Change: Explains why change will occur, works backward from impact
- Logic Model: Describes what will happen, works forward from inputs
| Strength | Weakness |
|---|---|
| Familiar to funders/nonprofits | Associated with program evaluation |
| Emphasizes causal pathways | Less about uncertainty |
| Links to impact measurement | May feel too outcome-focused |
Sources: NPC Theory of Change, La Piana comparison, TOC Toolkit
8. Concept Maps vs Knowledge Graphs
| Feature | Mind Map | Concept Map | Knowledge Graph |
|---|---|---|---|
| Structure | Tree (single center) | Network (multiple hubs) | Graph/Network |
| Relationships | Implicit | Labeled connections | Typed relationships |
| Origin | Tony Buzan (1974) | Joseph Novak (1970s) | Computer science |
Our diagrams are most similar to concept maps: network structure with labeled relationships between nodes.
Sources: NN/g Cognitive Maps, Gloow comparison, Concept Map Wikipedia
Uncertainty Types
Our diagrams include nodes with varying confidence levels. Understanding established uncertainty typologies helps us communicate what kinds of uncertainty we're representing.
Aleatoric vs Epistemic Uncertainty
| Type | Definition | Can be reduced? | Example |
|---|---|---|---|
| Aleatoric | Inherent randomness in the system | No—only quantified | Coin flip outcome |
| Epistemic | Uncertainty due to lack of knowledge | Yes—with more information | "Will this policy pass?" |
Key insight: "Epistemic uncertainty tends to be expressed using phrases like 'I am 90% sure' whereas aleatory uncertainty tends to be expressed using phrases like 'I think there's a 90% chance.'"
Our diagrams mostly represent epistemic uncertainty—things we don't know but could learn—rather than inherent randomness.
Sources: Springer Machine Learning, Berkeley Statistics, Towards Data Science
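If we adopt this typology, nodes could carry an explicit uncertainty tag. A minimal sketch, assuming hypothetical field names (`uncertainty`, `confidence`) that are not part of our current schema:

```typescript
// Sketch: tagging a factor node with an uncertainty type (names are assumptions).
type UncertaintyType = "epistemic" | "aleatoric";

interface FactorNode {
  label: string;
  uncertainty: UncertaintyType;
  confidence: number; // 0..1
}

// Epistemic uncertainty can shrink with more information, so those nodes
// are candidates for further research; aleatoric nodes can only be quantified.
function isResearchable(node: FactorNode): boolean {
  return node.uncertainty === "epistemic";
}
```

Surfacing this flag in the UI would let readers see at a glance which uncertainties more research could actually resolve.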
Naming Options Analysis
Based on this research, here are the main naming candidates:
Option 1: Crux Map
| Dimension | Assessment |
|---|---|
| Accuracy | High—captures "what would change your mind" |
| Familiarity | High for EA/rationalist audience, low outside |
| Distinctiveness | High—not easily confused with other diagram types |
| Fit with LongtermWiki | Excellent—aligns with Crux Graph component |
"Crux Map" directly aligns with LongtermWiki's mission (mapping key uncertainties) and the target audience's vocabulary. The "crux" terminology is already established in the rationalist/EA community.
Option 2: Factor Map
| Dimension | Assessment |
|---|---|
| Accuracy | High—we are mapping factors |
| Familiarity | Medium—generic but clear |
| Distinctiveness | Medium—could be confused with factor analysis |
| Fit with LongtermWiki | Good—neutral, accessible |
Option 3: Influence Map
| Dimension | Assessment |
|---|---|
| Accuracy | High—captures influence relationships |
| Familiarity | Medium—some association with decision analysis |
| Distinctiveness | Medium—could be confused with influence diagrams |
| Fit with LongtermWiki | Good—clear meaning |
Option 4: Consideration Map
| Dimension | Assessment |
|---|---|
| Accuracy | High—we're mapping considerations |
| Familiarity | Low—not established terminology |
| Distinctiveness | High—distinctive |
| Fit with LongtermWiki | Good—connects to "crucial considerations" |
Option 5: Driver Map/Diagram
| Dimension | Assessment |
|---|---|
| Accuracy | Medium—implies more agency |
| Familiarity | Medium—established in QI |
| Distinctiveness | Medium—associated with healthcare |
| Fit with LongtermWiki | Moderate—less about uncertainty |
Option 6: Priority Map
| Dimension | Assessment |
|---|---|
| Accuracy | Medium—we map factors, not just priorities |
| Familiarity | Low—not established |
| Distinctiveness | Medium |
| Fit with LongtermWiki | Moderate—captures decision-relevance |
Option 7: Leverage Map
| Dimension | Assessment |
|---|---|
| Accuracy | Medium—emphasizes tractability |
| Familiarity | Low—not established |
| Distinctiveness | High |
| Fit with LongtermWiki | Moderate—overemphasizes changeability |
Recommendations
Primary Recommendation: Crux Map
Rationale:
- Directly aligns with LongtermWiki's mission ("Crux Graph" is already a core component)
- Familiar to the target audience (EA/rationalist community)
- Captures the key insight: "what would change your mind?"
- Distinctive—won't be confused with other diagram types
- The word "crux" conveys importance and decision-relevance
Alternative: Factor Map
Use if: You want broader accessibility outside the rationalist community. "Factor" is neutral, clear, and doesn't require explaining jargon.
Naming for Node Scores
Our diagrams include four scoring dimensions. Consider renaming to clarify uncertainty types:
| Current Name | Suggested Rename | Rationale |
|---|---|---|
| novelty | Keep as is | Clear meaning |
| sensitivity | Keep as is | Clear meaning |
| changeability | tractability | More common in EA |
| certainty | epistemic confidence | Distinguishes from aleatoric |
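The proposed renames could be typed as below. The interface and the old/new field names beyond the table (e.g. `epistemicConfidence` as the camelCase form) are assumptions for illustration:

```typescript
// Sketch of node scores after the proposed renaming (illustrative).
interface NodeScores {
  novelty: number;             // unchanged
  sensitivity: number;         // unchanged
  tractability: number;        // was: changeability
  epistemicConfidence: number; // was: certainty
}

// Hypothetical migration from the current score shape to the new one.
function migrateScores(old: {
  novelty: number;
  sensitivity: number;
  changeability: number;
  certainty: number;
}): NodeScores {
  return {
    novelty: old.novelty,
    sensitivity: old.sensitivity,
    tractability: old.changeability,
    epistemicConfidence: old.certainty,
  };
}
```

A migration helper like this would let existing diagram data be converted in one pass alongside the file and component renames in the checklist below.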
Implementation Checklist
If renaming to "Crux Map":
- Update `cause-effect-diagrams.mdx` → `crux-maps.mdx`
- Update component name: `CauseEffectGraph` → `CruxMap`
- Update YAML property: `causeEffectGraph` → `cruxMap`
- Update style guide references
- Update skill name: `/cause-effect-diagram` → `/crux-map`
- Update documentation throughout
Sources
Crucial Considerations & Crux
- Nick Bostrom on Crucial Considerations
- EA Forum: Crucial Consideration
- LessWrong: Double Crux
- CFAR Double Crux