Longterm Wiki · Updated 2026-02-20
Change History

- Remove low-value validation rules and insights system (#175, 7 weeks ago): Audited last 20 PRs for unnecessary complexity. Removed 4 low-value validation rules (entity-mentions, mermaid-style, quality-source, human-attribution) and the entire insights data layer (18K lines YAML, 6 data files, CLI commands, components, internal page). Reduces rule count from 40 to 36 and eliminates an underused data subsystem.

Insights Index

This page collects discrete insights from across the project, calibrated for an audience of AI safety researchers and experts. Each insight is scored on four dimensions:

| Dimension | Question | Scale |
|---|---|---|
| Surprising | Would this update an informed AI safety researcher? | 1-5 |
| Important | Does this affect high-stakes decisions or research priorities? | 1-5 |
| Actionable | Does this suggest concrete work, research, or interventions? | 1-5 |
| Neglected | Is this getting less attention than it deserves? | 1-5 |

Types: claim (factual), research-gap, counterintuitive, quantitative, disagreement, neglected

See the Critical Insights framework for the theoretical basis.



Adding Insights

Insights are stored in `src/data/insights.yaml`. Be harsh on the `surprising` score: most well-known AI safety facts should be 1-2 for experts.

```yaml
- id: "XXX"
  insight: "Your insight here - a compact, specific claim."
  source: /path/to/source-page
  tags: [relevant, tags]
  type: claim  # or: research-gap, counterintuitive, quantitative, disagreement, neglected
  surprising: 2.5  # Would update an expert? (most should be 1-3)
  important: 4.2
  actionable: 3.5
  neglected: 3.0
  compact: 4.0
  added: "2025-01-21"
```
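The scored dimensions are easiest to work with programmatically. As a minimal sketch (the `insight_score` helper and the equal-weight averaging rule are illustrative assumptions, not part of the project), an entry's four scored dimensions could be combined like this:

```python
def insight_score(entry):
    """Mean of the four scored dimensions; missing scores count as 0.

    Note: equal weighting is an assumption for illustration -- this page
    does not define how (or whether) the dimensions are aggregated.
    """
    dims = ("surprising", "important", "actionable", "neglected")
    return sum(float(entry.get(d, 0)) for d in dims) / len(dims)

# Scores from the example entry above:
example = {"surprising": 2.5, "important": 4.2, "actionable": 3.5, "neglected": 3.0}
print(round(insight_score(example), 2))  # 3.3
```

A helper like this makes it straightforward to sort the loaded YAML list and surface the highest-value insights first.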

Prioritize counterintuitive findings, research gaps, specific quantitative claims, and neglected topics.