Cause-Effect Diagram Style Guide

Quick Reference

Cause-effect diagrams are defined in YAML under the causeEffectGraph property of entities:

- id: tmc-compute
  type: ai-transition-model-subitem
  title: Compute
  causeEffectGraph:
    title: "What Drives Effective AI Compute?"
    description: "Causal factors affecting frontier AI training compute."
    primaryNodeId: effective-compute
    nodes:
      - id: effective-compute
        label: Effective Compute
        type: effect
        description: Net compute available for AI training.
      - id: chip-supply
        label: Chip Supply
        type: leaf
        description: Supply of frontier AI training chips.
    edges:
      - source: chip-supply
        target: effective-compute
        strength: strong
        effect: increases

View all diagrams: Diagrams Schema

Node Types

Nodes are organized into four hierarchical types that flow from root causes to effects:

| Type | Purpose | Color | Shape |
| --- | --- | --- | --- |
| leaf | Root inputs, external factors | Teal/cyan | Rounded rectangle |
| cause | Derived factors from leaves | Light gray | Slightly rounded |
| intermediate | Direct contributing factors | Darker gray | Less rounded |
| effect | Target outcome(s) | Amber/yellow | Sharp corners |

Visual Hierarchy

The color gradient (teal → gray → amber) creates a visual flow from inputs to outputs. Shapes also encode function: rounder = more fundamental, sharper = final outcomes.
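
For example, a minimal chain that touches all four types might look like the sketch below (node ids and labels are illustrative only):

nodes:
  - { id: raw-input, label: Raw Input, type: leaf }                  # root input (teal, roundest shape)
  - { id: derived-factor, label: Derived Factor, type: cause }       # derived from leaves (light gray)
  - { id: direct-driver, label: Direct Driver, type: intermediate }  # direct contributing factor (darker gray)
  - { id: outcome, label: Outcome, type: effect }                    # target outcome (amber, sharp corners)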

Node Properties

| Property | Required | Description |
| --- | --- | --- |
| id | Yes | Unique identifier (kebab-case) |
| label | Yes | Display text (2-5 words) |
| type | Yes | One of: leaf, cause, intermediate, effect |
| description | No | Brief explanation (shown on hover) |
| confidence | No | 0-1 confidence in this node's relevance |
| color | No | Semantic color name (see Semantic Colors) |
| scores | No | Rating dimensions (see Node Scores) |
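
A fully specified node, with every optional property filled in, might look like the sketch below (the id, values, and scores are illustrative, not prescriptive):

nodes:
  - id: export-controls
    label: Export Controls
    type: leaf
    description: Restrictions on advanced chip exports.
    confidence: 0.7       # 0-1 confidence that this node belongs in the model
    color: emerald        # see Semantic Colors
    scores:               # see Node Scores
      novelty: 4
      sensitivity: 6
      changeability: 7
      certainty: 6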

Edge Properties

Edges represent causal relationships between nodes:

| Property | Values | Visual Effect |
| --- | --- | --- |
| strength | weak, medium, strong | Line thickness (1.2px, 2px, 3px) |
| confidence | low, medium, high | Line style (dashed, semi-dashed, solid) |
| effect | increases, decreases, mixed | Arrow color (blue, red, gray) |

Edge Direction

Edges flow from cause to effect. The source node influences the target node.

Edge Weight Guidelines

| Strength | Use When |
| --- | --- |
| strong | Direct, well-established causal link |
| medium | Meaningful but indirect influence |
| weak | Minor or speculative connection |
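
Putting the edge properties together, a set of edges with differentiated strength, confidence, and effect might look like the sketch below (chip-supply and effective-compute come from the Quick Reference; talent-pool and export-controls are illustrative):

edges:
  - source: chip-supply
    target: effective-compute
    strength: strong      # direct, well-established link: thick 3px line
    confidence: high      # solid line
    effect: increases     # blue arrow
  - source: talent-pool
    target: effective-compute
    strength: medium      # meaningful but indirect influence
    confidence: medium    # semi-dashed line
  - source: export-controls
    target: effective-compute
    strength: weak        # minor or speculative connection: thin line
    confidence: low       # dashed line
    effect: decreases     # red arrow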

Semantic Colors

Nodes can be assigned semantic colors to convey meaning at a glance. Use colors sparingly—about half of nodes should remain grey/neutral to make colored nodes stand out.

Available Colors

| Color | Use For | Example Nodes |
| --- | --- | --- |
| rose | Risk factors, concerning inputs | "Actor Intent", "AI Biology Knowledge" |
| red | Major threat domains, severe risks | "Biological Threat", "Cyber Threat" |
| emerald | Defensive factors, countermeasures | "Biosecurity Defenses", "Export Controls" |
| green | Positive interventions | Alternative to emerald |
| violet | Key uncertainties, critical questions | "Does offense or defense win?" |
| purple | Alternative uncertainty color | Similar to violet |
| teal | Power centers, key actors | "Company Ownership", "Country Control" |
| blue | Structural/policy factors | "Cloud Partnerships", "Corporate Structures" |
| amber | Warnings, caution | (Effect nodes use amber by default) |
| slate | Neutral, less critical nodes | Technical details, well-known factors |
| gray | Alternative neutral | Similar to slate |

Color Guidelines
  • Color ~50% of nodes; leave the rest grey/slate
  • Reserve colors for the most critical nodes (high sensitivity, novelty, or importance)
  • Use slate for well-known factors (low novelty) or less decisive nodes
  • Intermediates that just aggregate other nodes often work well as slate

Example

nodes:
  - id: key-threat
    label: Major Threat
    type: intermediate
    color: red
    description: Critical danger node
  - id: countermeasure
    label: Defense System
    type: leaf
    color: emerald
    description: Mitigating factor
  - id: technical-detail
    label: Implementation Detail
    type: leaf
    color: slate
    description: Less critical, well-known

Node Scores

Nodes can have four scoring dimensions (1-10 scale) that enable interactive visualization modes in the diagram viewer.

Score Dimensions

| Dimension | Scale | Description |
| --- | --- | --- |
| novelty | 1-10 | How surprising to an informed reader (1 = common knowledge, 10 = very surprising) |
| sensitivity | 1-10 | How much downstream nodes change if this changes (1 = minimal impact, 10 = huge cascading effects) |
| changeability | 1-10 | How tractable/malleable is this factor (1 = fixed/immutable, 10 = highly changeable) |
| certainty | 1-10 | How well understood (1 = deep uncertainty, 10 = well established) |

View Modes

When viewing a diagram with scores, use the view mode buttons to highlight nodes by score dimension. Higher-scoring nodes appear more prominent; lower-scoring nodes fade out.

Example

nodes:
  - id: ai-biology-knowledge
    label: AI Biology Knowledge
    type: leaf
    description: LLM knowledge of virology and synthesis routes.
    scores:
      novelty: 5        # Discussed but details less known
      sensitivity: 8    # Key enabler for bio-threat path
      changeability: 3  # Hard to roll back model knowledge
      certainty: 5      # Some benchmarks but real-world uplift uncertain
    color: rose

Scoring Guidelines

| Score | Novelty | Sensitivity | Changeability | Certainty |
| --- | --- | --- | --- | --- |
| 1-3 | Common knowledge | Minimal downstream impact | Fixed/immutable | Deep uncertainty |
| 4-6 | Somewhat familiar | Moderate impact | Somewhat malleable | Mixed evidence |
| 7-10 | Surprising insight | Huge cascading effects | Highly tractable | Well established |

Layout Best Practices

Node Count Limits

| Guideline | Recommendation |
| --- | --- |
| Maximum nodes | 15-20 (beyond this, split into multiple diagrams) |
| Maximum edges | 30-40 (avoid spaghetti) |
| Nodes per layer | 4-5 max horizontal nodes per type |

Layer Organization

Organize nodes by their causal position:

Layer 1: leaf nodes (root causes, external factors)
    ↓
Layer 2: cause nodes (derived from leaves)
    ↓
Layer 3: intermediate nodes (direct factors)
    ↓
Layer 4: effect nodes (target outcomes)

Avoid Feedback Loops

The diagram renderer uses Dagre (DAG layout). Cycles create messy diagonal lines. If your model has feedback loops, note them in the description but omit them from the edges.
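
For example, a capability-investment feedback loop could be handled as in the sketch below: only the forward edge is drawn, and the reverse influence is noted in a description (ids and wording are illustrative):

nodes:
  - id: investment
    label: AI Investment
    type: leaf
    description: Capital flowing into frontier AI. Also boosted by capability gains (feedback not drawn).
  - id: capabilities
    label: AI Capabilities
    type: effect
    description: Frontier model capability level.
edges:
  - source: investment
    target: capabilities
    strength: strong
  # The capabilities-to-investment feedback lives in the description rather than an edge,
  # which keeps the graph acyclic for the Dagre layout.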

When to Use Each Diagram Type

| Use Case | Diagram Type |
| --- | --- |
| Causal model with 5-20 factors | CauseEffectGraph |
| Simple flow (3-5 steps) | Mermaid flowchart |
| Taxonomy/hierarchy | Table or Mermaid |
| Proportions | Mermaid pie chart |
| 2x2 matrix | Mermaid quadrant |
| Timeline | Mermaid timeline |

Choose CauseEffectGraph When
  • You're modeling causal relationships (not just categories)
  • You need edge properties (strength, confidence, direction)
  • The diagram will be reused across pages
  • You want interactive hover details

YAML Schema Reference

causeEffectGraph:
  title: "Diagram Title"           # Optional, falls back to entity title
  description: "Brief explanation" # Optional, shown in header
  primaryNodeId: "main-effect"     # Optional, highlights this node

  nodes:
    - id: node-id                  # Required: unique identifier
      label: "Node Label"          # Required: display text
      type: leaf                   # Required: leaf|cause|intermediate|effect
      description: "Hover text"    # Optional: explanation
      confidence: 0.8              # Optional: 0-1 confidence
      color: rose                  # Optional: semantic color (rose|red|emerald|violet|teal|blue|slate|etc)
      scores:                      # Optional: rating dimensions (1-10)
        novelty: 5
        sensitivity: 8
        changeability: 3
        certainty: 5

  edges:
    - source: cause-node           # Required: source node id
      target: effect-node          # Required: target node id
      strength: strong             # Optional: weak|medium|strong (default: medium)
      confidence: high             # Optional: low|medium|high (default: medium)
      effect: increases            # Optional: increases|decreases|mixed (default: increases)
      label: "edge label"          # Optional: text on edge (rarely used)

Common Patterns

Simple Causal Chain

nodes:
  - { id: a, label: Root Cause, type: leaf }
  - { id: b, label: Mediator, type: intermediate }
  - { id: c, label: Outcome, type: effect }
edges:
  - { source: a, target: b, strength: strong }
  - { source: b, target: c, strength: strong }

Multiple Converging Causes

nodes:
  - { id: a, label: Factor A, type: leaf }
  - { id: b, label: Factor B, type: leaf }
  - { id: c, label: Factor C, type: leaf }
  - { id: outcome, label: Outcome, type: effect }
edges:
  - { source: a, target: outcome, strength: strong }
  - { source: b, target: outcome, strength: medium }
  - { source: c, target: outcome, strength: weak }

Hierarchical Factors

nodes:
  # Root inputs
  - { id: input-1, label: External Factor, type: leaf }
  - { id: input-2, label: Resource, type: leaf }
  # Derived factors
  - { id: derived, label: Combined Effect, type: cause }
  # Direct factors
  - { id: direct-1, label: Capability, type: intermediate }
  - { id: direct-2, label: Constraint, type: intermediate }
  # Target
  - { id: outcome, label: Final Outcome, type: effect }
edges:
  - { source: input-1, target: derived }
  - { source: input-2, target: derived }
  - { source: derived, target: direct-1 }
  - { source: derived, target: direct-2 }
  - { source: direct-1, target: outcome, strength: strong }
  - { source: direct-2, target: outcome, strength: medium, effect: decreases }

Anti-Patterns

Don't: Too Many Leaf Nodes

Having 8+ leaf nodes creates a very wide top layer:

# Bad: 8 parallel leaf nodes
nodes:
  - { id: a, type: leaf }
  - { id: b, type: leaf }
  - { id: c, type: leaf }
  # ... 5 more

Fix: Group related factors into intermediate nodes, or split into multiple diagrams.

Don't: Feedback Loops in Edges

# Bad: Creates cycles
edges:
  - { source: a, target: b }
  - { source: b, target: c }
  - { source: c, target: a }  # Loop!

Fix: Remove the loop edge and note the feedback in the description.

Don't: Long Node Labels

# Bad
- { id: x, label: "This is a very long label explaining everything" }

# Good
- { id: x, label: "Key Factor", description: "Detailed explanation here" }

Don't: All Same Strength

# Bad: No visual hierarchy
edges:
  - { source: a, target: x, strength: medium }
  - { source: b, target: x, strength: medium }
  - { source: c, target: x, strength: medium }

# Good: Differentiate importance
edges:
  - { source: a, target: x, strength: strong }
  - { source: b, target: x, strength: medium }
  - { source: c, target: x, strength: weak }

Integration with Pages

Using TransitionModelContent

For AI Transition Model pages, diagrams are automatically rendered by TransitionModelContent:

<TransitionModelContent slug="compute" />

Standalone Viewer

Link to the standalone diagram viewer for presentations or full-screen viewing:

/diagrams?entity=tmc-compute

Color Reference

Node Colors by Type

| Type | Background | Text | Border |
| --- | --- | --- | --- |
| leaf | #ccfbf1 (teal-100) | #0f766e (teal-700) | #5eead4 (teal-300) |
| cause | #e2e8f0 (slate-200) | #334155 (slate-700) | #94a3b8 (slate-400) |
| intermediate | #cbd5e1 (slate-300) | #1e293b (slate-800) | #64748b (slate-500) |
| effect | #fef3c7 (amber-100) | #92400e (amber-800) | #fbbf24 (amber-400) |

Edge Colors by Effect

| Effect | Color | Meaning |
| --- | --- | --- |
| increases | Blue | Positive effect (source increases the target) |
| decreases | Red | Negative effect (source decreases the target) |
| mixed | Gray | Context-dependent effect |