Longterm Wiki
Updated 2026-03-16

Entities & Pages

Unified view of all 1914 entities: 564 have wiki pages, 511 have importance scores, and 0 have coverage data. Use the preset buttons to switch views: the Overview preset defaults to pages with content, while the Content Authoring preset focuses on coverage gaps, stale content, and citation problems.
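The presets are essentially saved filters over the same entity records. As a rough sketch of what a Content Authoring-style filter might look like, the snippet below uses hypothetical field names (`quality`, `coverage_passing`, `updated_days_ago`) and an assumed 90-day staleness threshold; the wiki's actual schema and preset logic are not shown on this page.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EntityRecord:
    """One row of the entities table (field names are illustrative, not the wiki's schema)."""
    title: str
    entity_type: str                 # concept, approach, risk, organization, ...
    quality: Optional[int]           # quality score, 0-100; None if unscored
    importance: Optional[int]        # reader importance, 0-100; None if unscored
    coverage_passing: Optional[int]  # passing checklist items out of 13; None if no coverage data
    updated_days_ago: int
    word_count: int
    category: str

def content_authoring_view(entities: List[EntityRecord],
                           stale_after_days: int = 90) -> List[EntityRecord]:
    """Hypothetical 'Content Authoring' preset: surface coverage gaps and stale pages."""
    return [
        e for e in entities
        if e.coverage_passing is None               # no coverage data yet
        or e.coverage_passing < 13                  # failing at least one checklist item
        or e.updated_days_ago > stale_after_days    # content has gone stale
    ]
```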

564 of 1914 entities
| Entity / page title | Entity type | Quality score (0-100) | Reader importance (0-100) | Coverage (passing of 13) | Hallucination risk | Last update | Word count | Category |
|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | - | - | 4mo | 6.5k | models |
| Superintelligence | concept | 92 | 95 | - | - | 4mo | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | - | - | 4mo | 4.0k | risks |
| AI Scaling Laws | concept | 92 | 93 | - | - | 4mo | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | - | - | 2mo | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | - | - | 2mo | 4.5k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | - | - | 5w | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | - | - | 5w | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | - | - | 4mo | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | - | - | 4mo | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | - | - | 4mo | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | - | - | 2mo | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | - | - | 2mo | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | - | - | 2mo | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | - | - | 2mo | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | - | - | 2mo | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | - | - | 2mo | 2.8k | responses |
| Compute Thresholds | concept | 91 | 56 | - | - | 2mo | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | - | - | 2mo | 5.3k | responses |
| International Coordination Mechanisms | concept | 91 | 24 | - | - | 2mo | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | - | - | 2mo | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | - | - | 2mo | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | - | - | 2mo | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | - | - | 2mo | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | - | - | 2mo | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | - | - | 2mo | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | - | - | 4mo | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | - | - | 4mo | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | - | - | 4mo | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | - | - | 4mo | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | - | - | 4mo | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | - | - | 4mo | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | - | - | 4mo | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | - | - | 4mo | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | - | - | 4mo | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | - | - | 2mo | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | - | - | 2mo | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | - | - | 5w | 9.0k | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | - | - | 2mo | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | - | - | 2mo | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | - | - | 2mo | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | - | - | 2mo | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | - | - | 2mo | 1.8k | risks |
| FAR AI | organization | 76 | 85 | - | - | 4mo | 3.2k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | - | - | 2mo | 2.6k | organizations |
| AI Control | research-area | 75 | 69 | - | - | 5w | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | - | - | 2mo | 2.4k | responses |
| Deceptive Alignment | risk | 75 | 19 | - | - | 4mo | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | - | - | 2mo | 2.5k | models |
| Anthropic | organization | 74 | 52 | - | - | 2mo | 5.1k | organizations |
Page 1 of 12