Longterm Wiki

Update Schedule

Pages ranked by update priority. Priority is calculated as staleness (days since the last edit divided by the page's target update interval) weighted by importance. 65 pages are currently overdue.
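
The formula above can be sketched in a few lines of Python. This is a minimal reconstruction, not the wiki's actual implementation: the interval lengths, the importance/100 weighting, and the function names are assumptions inferred from the description and the listed scores (e.g. a 3-week page last edited 47 days ago with importance 78 scores 1.75).

```python
# Sketch of the update-priority formula described above:
#   priority = staleness * importance weight
# where staleness = days since last edit / target update interval.
# Interval lengths and the importance/100 weighting are assumptions
# inferred from the listed scores, not a confirmed implementation.

INTERVAL_DAYS = {  # assumed interval lengths in days
    "Weekly": 7,
    "3 weeks": 21,
    "Monthly": 30,
    "6 weeks": 42,
    "Bimonthly": 60,
    "Quarterly": 90,
}

def update_priority(days_since_edit: int, frequency: str, importance: int) -> float:
    """Staleness (days since edit / update interval) weighted by importance."""
    staleness = days_since_edit / INTERVAL_DAYS[frequency]
    return round(staleness * importance / 100, 2)

def is_overdue(days_since_edit: int, frequency: str) -> bool:
    """A page counts as overdue once a full update interval has passed."""
    return days_since_edit > INTERVAL_DAYS[frequency]

# Example: a 3-week page last edited 47 days ago with importance 78.
print(update_priority(47, "3 weeks", 78))  # 1.75, matching "Compute Monitoring"
```

Under these assumptions the two staleness factors multiply cleanly: a page twice as stale or twice as important doubles its priority, which matches the rough ordering of the list below (a few rows deviate, suggesting the live system uses slightly different interval lengths or extra adjustments).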

All 468 pages are listed below.
Page · Frequency · Last edited · Status · Importance · Priority
AI-Enabled Untraceable Misuse · Monthly · 365d ago · 49w overdue · 55 · 8.03
Corporate AI Safety Responses · Weekly · 16d ago · 9d overdue · 78 · 1.78
Compute Monitoring · 3 weeks · 47d ago · 26d overdue · 78 · 1.75
AI Alignment Research Agenda Comparison · 3 weeks · 47d ago · 26d overdue · 78 · 1.75
Dario Amodei · Weekly · 51d ago · 6w overdue · 23 · 1.68
AI Misuse Risk Cruxes · Weekly · 14d ago · 7d overdue · 82 · 1.64
AI Chip Export Controls · 3 weeks · 47d ago · 26d overdue · 72 · 1.61
International Compute Regimes · 3 weeks · 47d ago · 26d overdue · 72 · 1.61
Seoul AI Safety Summit Declaration · 3 weeks · 47d ago · 26d overdue · 68 · 1.52
Anthropic Core Views · 3 weeks · 47d ago · 26d overdue · 67 · 1.50
California SB 53 · Weekly · 12d ago · 5d overdue · 85 · 1.46
Geopolitics & Coordination · Weekly · 15d ago · 8d overdue · 67 · 1.44
US Executive Order on Safe, Secure, and Trustworthy AI · Weekly · 14d ago · 7d overdue · 72 · 1.44
Autonomous Weapons · 3 weeks · 47d ago · 26d overdue · 64 · 1.43
AI Risk Public Education · 3 weeks · 48d ago · 27d overdue · 62 · 1.42
UK AI Safety Institute · 3 weeks · 47d ago · 26d overdue · 62 · 1.39
EU AI Act · Weekly · 12d ago · 5d overdue · 78 · 1.34
Disinformation · 3 weeks · 51d ago · 30d overdue · 54 · 1.31
ARC (Alignment Research Center) · 3 weeks · 51d ago · 30d overdue · 53 · 1.29
Safe Superintelligence Inc (SSI) · Weekly · 12d ago · 5d overdue · 75 · 1.29
New York RAISE Act · Weekly · 12d ago · 5d overdue · 75 · 1.29
California SB 1047 · Weekly · 12d ago · 5d overdue · 74 · 1.27
Deepfakes · 3 weeks · 51d ago · 30d overdue · 52 · 1.26
Google DeepMind · Weekly · 15d ago · 12d overdue · 25 · 1.25
Council of Europe Framework Convention on Artificial Intelligence · Weekly · 12d ago · 5d overdue · 65 · 1.11
OpenAI Foundation · Weekly · 9d ago · 2d overdue · 85 · 1.09
OpenAI Foundation Governance Paradox · Weekly · 9d ago · 2d overdue · 82 · 1.05
CAIS (Center for AI Safety) · 3 weeks · 51d ago · 30d overdue · 42 · 1.02
AI-Powered Fraud · 3 weeks · 51d ago · 30d overdue · 42 · 1.02
Musk v. OpenAI Lawsuit · Weekly · 9d ago · 2d overdue · 78 · 1.00
GovAI · 3 weeks · 47d ago · 26d overdue · 42 · 0.94
CHAI (Center for Human-Compatible AI) · 3 weeks · 51d ago · 30d overdue · 38 · 0.92
Microsoft AI · Weekly · 15d ago · 8d overdue · 42 · 0.90
AI Safety Solution Cruxes · 6 weeks · 47d ago · 2d overdue · 82 · 0.86
Frontier AI Company Comparison (2026) · Weekly · 9d ago · 2d overdue · 65 · 0.84
AI Safety Institutes · 6 weeks · 47d ago · 2d overdue · 78 · 0.81
AI Structural Risk Cruxes · 6 weeks · 47d ago · 2d overdue · 74 · 0.77
Texas TRAIGA Responsible AI Governance Act · Weekly · 12d ago · 5d overdue · 45 · 0.77
AI Flash Dynamics · 6 weeks · 47d ago · 2d overdue · 74 · 0.77
AI-Powered Consensus Manufacturing · 6 weeks · 47d ago · 2d overdue · 73 · 0.76
Corporate Influence on AI Policy · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI-Era Epistemic Security · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
Open Source AI Safety · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI Standards Bodies · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI-Driven Institutional Decision Capture · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI-Induced Irreversibility · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI Value Lock-in · 6 weeks · 47d ago · 2d overdue · 72 · 0.75
AI Winner-Take-All Dynamics · 6 weeks · 51d ago · 6d overdue · 64 · 0.73
AI Content Authentication · 6 weeks · 47d ago · 2d overdue · 64 · 0.67
AI-Era Epistemic Infrastructure · 6 weeks · 47d ago · 2d overdue · 64 · 0.67
Is Interpretability Sufficient for Safety? · 6 weeks · 47d ago · 2d overdue · 62 · 0.65
AI Control · 3 weeks · 16d ago · due in 5d · 85 · 0.65
AI-Augmented Forecasting · 6 weeks · 48d ago · 3d overdue · 61 · 0.65
AI-Assisted Deliberation Platforms · 6 weeks · 47d ago · 2d overdue · 62 · 0.65
Epistemic Sycophancy · 6 weeks · 47d ago · 2d overdue · 62 · 0.65
Evals & Red-teaming · 3 weeks · 16d ago · due in 5d · 82 · 0.62
AI Evaluation · 3 weeks · 16d ago · due in 5d · 82 · 0.62
Dangerous Capability Evaluations · 3 weeks · 15d ago · due in 6d · 84 · 0.60
Long-Horizon Autonomous Tasks · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Self-Improvement and Recursive Enhancement · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Situational Awareness · 3 weeks · 15d ago · due in 6d · 83 · 0.59
METR · 3 weeks · 15d ago · due in 6d · 82 · 0.59
AI-Assisted Alignment · 3 weeks · 15d ago · due in 6d · 82 · 0.59
AI Alignment · 3 weeks · 14d ago · due in 7d · 88 · 0.59
AI Governance and Policy · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Mechanistic Interpretability · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Pause / Moratorium · 3 weeks · 16d ago · due in 5d · 78 · 0.59
Responsible Scaling Policies · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Sleeper Agent Detection · 3 weeks · 15d ago · due in 6d · 82 · 0.59
Technical AI Safety Research · 3 weeks · 15d ago · due in 6d · 82 · 0.59
AI Knowledge Monopoly · 6 weeks · 51d ago · 6d overdue · 52 · 0.59
Autonomous Coding · 3 weeks · 15d ago · due in 6d · 81 · 0.58
AI Safety Intervention Portfolio · 3 weeks · 14d ago · due in 7d · 87 · 0.58
Model Registries · 3 weeks · 16d ago · due in 5d · 75 · 0.57
Scheming & Deception Detection · 3 weeks · 14d ago · due in 7d · 85 · 0.57
Persuasion and Social Manipulation · 3 weeks · 15d ago · due in 6d · 78 · 0.56
AI Capabilities Metrics · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Alignment Evaluations · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Circuit Breakers / Inference Interventions · 3 weeks · 16d ago · due in 5d · 74 · 0.56
Evals-Based Deployment Gates · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Hardware-Enabled Governance · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Mechanistic Interpretability · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Red Teaming · 3 weeks · 16d ago · due in 5d · 74 · 0.56
Responsible Scaling Policies · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Tool-Use Restrictions · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Voluntary Industry Commitments · 3 weeks · 15d ago · due in 6d · 78 · 0.56
Large Language Models · 3 weeks · 14d ago · due in 7d · 82 · 0.55
Reasoning and Planning · 3 weeks · 14d ago · due in 7d · 82 · 0.55
Alignment Progress · 3 weeks · 14d ago · due in 7d · 82 · 0.55
Capability Elicitation · 3 weeks · 14d ago · due in 7d · 82 · 0.55
Representation Engineering · 3 weeks · 16d ago · due in 5d · 72 · 0.55
AI Safety Cases · 3 weeks · 14d ago · due in 7d · 82 · 0.55
Bioweapons · 3 weeks · 14d ago · due in 7d · 82 · 0.55
International Coordination Mechanisms · 3 weeks · 15d ago · due in 6d · 75 · 0.54
Third-Party Model Auditing · 3 weeks · 15d ago · due in 6d · 75 · 0.54
Multi-Agent Safety · 3 weeks · 15d ago · due in 6d · 76 · 0.54
Epistemic Learned Helplessness · 6 weeks · 42d ago · due in 3d · 58 · 0.54
AGI Timeline · 3 weeks · 15d ago · due in 6d · 74 · 0.53
Tool Use and Computer Use · 3 weeks · 14d ago · due in 7d · 78 · 0.52
Sandboxing / Containment · 3 weeks · 14d ago · due in 7d · 78 · 0.52
Structured Access / API-Only · 3 weeks · 14d ago · due in 7d · 78 · 0.52
Compute Thresholds · 3 weeks · 14d ago · due in 7d · 78 · 0.52
Large Language Models · 3 weeks · 15d ago · due in 6d · 72 · 0.51
Heavy Scaffolding / Agentic Systems · 3 weeks · 15d ago · due in 6d · 72 · 0.51
Expert Opinion · 3 weeks · 15d ago · due in 6d · 71 · 0.51
International AI Safety Summits · 3 weeks · 15d ago · due in 6d · 72 · 0.51
AI Trust Cascade Failure · 6 weeks · 51d ago · 6d overdue · 45 · 0.51
Dense Transformers · 3 weeks · 14d ago · due in 7d · 73 · 0.49
Leading the Future super PAC · 3 weeks · 12d ago · due in 9d · 85 · 0.49
AI Output Filtering · 3 weeks · 16d ago · due in 5d · 64 · 0.49
Open vs Closed Source AI · 3 weeks · 15d ago · due in 6d · 67 · 0.48
Safety Research & Resources · 3 weeks · 14d ago · due in 7d · 72 · 0.48
Sparse Autoencoders (SAEs) · 3 weeks · 14d ago · due in 7d · 72 · 0.48
AI-Induced Cyber Psychosis · 6 weeks · 51d ago · 6d overdue · 42 · 0.48
Cyberweapons · 3 weeks · 14d ago · due in 7d · 72 · 0.48
Historical Revisionism · 6 weeks · 51d ago · 6d overdue · 42 · 0.48
AI-Driven Legal Evidence Crisis · 6 weeks · 51d ago · 6d overdue · 42 · 0.48
Mass Surveillance · 3 weeks · 15d ago · due in 6d · 67 · 0.48
AGI Development · 3 weeks · 16d ago · due in 5d · 62 · 0.47
Light Scaffolding · 3 weeks · 16d ago · due in 5d · 62 · 0.47
World Models + Planning · 3 weeks · 16d ago · due in 5d · 62 · 0.47
Probing / Linear Probes · 3 weeks · 16d ago · due in 5d · 62 · 0.47
Defense in Depth Model · Quarterly · 49d ago · due in 6w · 82 · 0.45
Risk Activation Timeline Model · Quarterly · 49d ago · due in 6w · 82 · 0.45
US AI Safety Institute · 3 weeks · 14d ago · due in 7d · 68 · 0.45
Government Regulation vs Industry Self-Governance · 3 weeks · 15d ago · due in 6d · 62 · 0.44
Pause and Redirect - The Deliberate Path · 3 weeks · 15d ago · due in 6d · 62 · 0.44
Apollo Research · 3 weeks · 15d ago · due in 6d · 62 · 0.44
AI Labor Transition & Economic Resilience · 6 weeks · 47d ago · 2d overdue · 42 · 0.44
AI Model Specifications · 3 weeks · 15d ago · due in 6d · 61 · 0.44
Authoritarian Tools · 3 weeks · 15d ago · due in 6d · 62 · 0.44
AI Risk Portfolio Analysis · Quarterly · 47d ago · due in 6w · 82 · 0.43
Capabilities-to-Safety Pipeline Model · Quarterly · 48d ago · due in 6w · 81 · 0.43
Capability-Alignment Race Model · Quarterly · 47d ago · due in 6w · 82 · 0.43
Capability Threshold Model · Quarterly · 47d ago · due in 6w · 82 · 0.43
Intervention Timing Windows · Quarterly · 47d ago · due in 6w · 82 · 0.43
Racing Dynamics Impact Model · Quarterly · 50d ago · due in 6w · 78 · 0.43
ControlAI · 3 weeks · 12d ago · due in 9d · 75 · 0.43
Frontier Model Forum · 3 weeks · 12d ago · due in 9d · 75 · 0.43
Meta AI (FAIR) · Weekly · 2d ago · due in 1d · 65 · 0.43
NIST and AI Safety · 3 weeks · 12d ago · due in 9d · 75 · 0.43
OpenAI · Weekly · 2d ago · due in 1d · 65 · 0.43
Palisade Research · 3 weeks · 12d ago · due in 9d · 75 · 0.43
David Sacks (White House AI Czar) · 3 weeks · 12d ago · due in 9d · 75 · 0.43
Demis Hassabis · 6 weeks · 47d ago · 2d overdue · 41 · 0.43
Marc Andreessen (AI Investor) · 3 weeks · 12d ago · due in 9d · 75 · 0.43
Bletchley Declaration · 3 weeks · 12d ago · due in 9d · 75 · 0.43
Compute Governance: AI Chips Export Controls Policy · 3 weeks · 12d ago · due in 9d · 75 · 0.43
AI-Bioweapons Timeline Model · Quarterly · 49d ago · due in 6w · 78 · 0.42
Autonomous Cyber Attack Timeline · Quarterly · 48d ago · due in 6w · 78 · 0.42
Instrumental Convergence Framework · Quarterly · 49d ago · due in 6w · 78 · 0.42
Risk Cascade Pathways · Quarterly · 49d ago · due in 6w · 78 · 0.42
Risk Interaction Network · Quarterly · 48d ago · due in 6w · 78 · 0.42
Safety Research Allocation Model · Quarterly · 48d ago · due in 6w · 78 · 0.42
AI Safety Talent Supply/Demand Gap Model · Quarterly · 48d ago · due in 6w · 78 · 0.42
Warning Signs Model · Quarterly · 49d ago · due in 6w · 78 · 0.42
Worldview-Intervention Mapping · Quarterly · 49d ago · due in 6w · 78 · 0.42
Autonomous Weapons Escalation Model · Quarterly · 48d ago · due in 6w · 77 · 0.41
Bioweapons Attack Chain Model · Quarterly · 49d ago · due in 6w · 76 · 0.41
Multipolar Trap Dynamics Model · Quarterly · 49d ago · due in 6w · 76 · 0.41
AI Proliferation Risk Model · Quarterly · 49d ago · due in 6w · 76 · 0.41
Expected Value of AI Safety Research · Quarterly · 49d ago · due in 6w · 75 · 0.41
Epoch AI · 3 weeks · 14d ago · due in 7d · 62 · 0.41
Flash Dynamics Threshold Model · Quarterly · 49d ago · due in 6w · 73 · 0.40
Risk Interaction Matrix Model · Quarterly · 47d ago · due in 6w · 76 · 0.40
Redwood Research · 3 weeks · 13d ago · due in 8d · 65 · 0.40
Scalable Oversight · Quarterly · 47d ago · due in 6w · 77 · 0.40
Authentication Collapse Timeline Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
Authoritarian Tools Diffusion Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
AI Risk Critical Uncertainties Model · Quarterly · 47d ago · due in 6w · 74 · 0.39
Cyber Offense-Defense Balance Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
Deceptive Alignment Decomposition Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
Goal Misgeneralization Probability Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
Irreversibility Threshold Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
Mesa-Optimization Risk Analysis · Quarterly · 49d ago · due in 6w · 71 · 0.39
Winner-Take-All Concentration Model · Quarterly · 49d ago · due in 6w · 72 · 0.39
International AI Coordination Game · Quarterly · 48d ago · due in 6w · 72 · 0.38
Multi-Actor Strategic Landscape · Quarterly · 47d ago · due in 6w · 72 · 0.38
Regulatory Capacity Threshold Model · Quarterly · 46d ago · due in 6w · 74 · 0.38
Reward Hacking Taxonomy and Severity Model · Quarterly · 47d ago · due in 6w · 72 · 0.38
AI Surveillance and Regime Durability Model · Quarterly · 47d ago · due in 6w · 72 · 0.38
Technical Pathway Decomposition · Quarterly · 47d ago · due in 6w · 72 · 0.38
Whistleblower Dynamics Model · Quarterly · 48d ago · due in 6w · 72 · 0.38
Sam Altman · Weekly · 12d ago · 5d overdue · 22 · 0.38
Claude Code Espionage Incident (2025) · 3 weeks · 12d ago · due in 9d · 65 · 0.37
LAWS Proliferation Model · Quarterly · 49d ago · due in 6w · 68 · 0.37
Parameter Interaction Network · Quarterly · 46d ago · due in 6w · 72 · 0.37
Sycophancy Feedback Loop Model · Quarterly · 50d ago · due in 6w · 67 · 0.37
MATS ML Alignment Theory Scholars program · 3 weeks · 12d ago · due in 9d · 65 · 0.37
Why Alignment Might Be Easy · Quarterly · 47d ago · due in 6w · 68 · 0.36
Automation Bias Cascade Model · Quarterly · 48d ago · due in 6w · 68 · 0.36
Compounding Risks Analysis · Quarterly · 49d ago · due in 6w · 67 · 0.36
Elon Musk (AI Industry) · Weekly · 12d ago · 5d overdue · 21 · 0.36
Epistemic Collapse Threshold Model · Quarterly · 49d ago · due in 6w · 62 · 0.34
Expertise Atrophy Cascade Model · Quarterly · 49d ago · due in 6w · 62 · 0.34
Agent Foundations · Quarterly · 47d ago · due in 6w · 65 · 0.34
Surveillance Chilling Effects Model · Quarterly · 48d ago · due in 6w · 62 · 0.33
Why Alignment Might Be Hard · Quarterly · 47d ago · due in 6w · 62 · 0.32
Economic & Labor Metrics · 3 weeks · 16d ago · due in 5d · 42 · 0.32
80,000 Hours · 3 weeks · 15d ago · due in 6d · 42 · 0.30
xAI · Weekly · 2d ago · due in 1d · 45 · 0.30
Deceptive Alignment · 6 weeks · 16d ago · due in 4w · 85 · 0.30
Biological / Organoid Computing · 3 weeks · 16d ago · due in 5d · 38 · 0.29
Expertise Atrophy Progression Model · Quarterly · 50d ago · due in 6w · 52 · 0.29
Media-Policy Feedback Loop Model · Quarterly · 48d ago · due in 6w · 54 · 0.29
Power-Seeking AI · 6 weeks · 15d ago · due in 4w · 87 · 0.29
AI Development Racing Dynamics · 6 weeks · 16d ago · due in 4w · 82 · 0.29
Treacherous Turn · 6 weeks · 15d ago · due in 4w · 87 · 0.29
The Case AGAINST AI Existential Risk · Quarterly · 47d ago · due in 6w · 54 · 0.28
Post-Incident Recovery Model · Quarterly · 48d ago · due in 6w · 52 · 0.28
RLHF / Constitutional AI · 6 weeks · 15d ago · due in 4w · 83 · 0.28
AI Safety Training Programs · 6 weeks · 16d ago · due in 4w · 75 · 0.27
Corrigibility Failure · 6 weeks · 15d ago · due in 4w · 82 · 0.27
Instrumental Convergence · 6 weeks · 15d ago · due in 4w · 82 · 0.27
Mesa-Optimization · 6 weeks · 15d ago · due in 4w · 82 · 0.27
AI Capability Sandbagging · 6 weeks · 15d ago · due in 4w · 82 · 0.27
Scheming · 6 weeks · 15d ago · due in 4w · 82 · 0.27
Sharp Left Turn · 6 weeks · 15d ago · due in 4w · 81 · 0.27
AI Epistemic Cruxes · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Provable / Guaranteed Safe AI · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Disinformation Detection Arms Race Model · Quarterly · 48d ago · due in 6w · 48 · 0.26
Global Partnership on Artificial Intelligence (GPAI) · 3 weeks · 12d ago · due in 9d · 45 · 0.26
Seldon Lab · 3 weeks · 12d ago · due in 9d · 45 · 0.26
Capability Unlearning / Removal · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Constitutional AI · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Eval Saturation & The Evals Gap · 3 weeks · 6d ago · due in 2w · 90 · 0.26
Evaluation Awareness · 3 weeks · 6d ago · due in 2w · 90 · 0.26
AI Lab Safety Culture · 6 weeks · 15d ago · due in 4w · 78 · 0.26
Process Supervision · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Refusal Training · 6 weeks · 16d ago · due in 4w · 72 · 0.26
AI-Enabled Authoritarian Takeover · 6 weeks · 15d ago · due in 4w · 78 · 0.26
AI-Driven Concentration of Power · 6 weeks · 16d ago · due in 4w · 72 · 0.26
Emergent Capabilities · 6 weeks · 15d ago · due in 4w · 78 · 0.26
Goal Misgeneralization · 6 weeks · 15d ago · due in 4w · 78 · 0.26
Multipolar Trap (AI Development) · 6 weeks · 14d ago · due in 4w · 82 · 0.26
Proliferation · 6 weeks · 15d ago · due in 4w · 78 · 0.26
Deep Learning Revolution (2012-2020) · Quarterly · 51d ago · due in 6w · 44 · 0.25
Deepfakes Authentication Crisis Model · Quarterly · 48d ago · due in 6w · 47 · 0.25
Nick Bostrom · 6 weeks · 51d ago · 6d overdue · 22 · 0.25
Scalable Eval Approaches · 3 weeks · 6d ago · due in 2w · 88 · 0.25
Reward Hacking · 6 weeks · 14d ago · due in 4w · 79 · 0.25
AI Accident Risk Cruxes · 6 weeks · 14d ago · due in 4w · 78 · 0.24
Misaligned Catastrophe - The Bad Ending · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Slow Takeoff Muddle - Muddling Through · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Epistemic & Forecasting Organizations · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Future of Humanity Institute (FHI) · 3 weeks · 15d ago · due in 6d · 34 · 0.24
Pause AI · 3 weeks · 12d ago · due in 9d · 42 · 0.24
Biosecurity Interventions · 3 weeks · 7d ago · due in 14d · 72 · 0.24
AI Governance Coordination Technologies · 6 weeks · 14d ago · due in 4w · 77 · 0.24
Is EA Biosecurity Work Limited to Restricting LLM Biological Use? · 3 weeks · 7d ago · due in 14d · 72 · 0.24
AI Safety Field Building Analysis · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Pause Advocacy · 6 weeks · 14d ago · due in 4w · 78 · 0.24
Preference Optimization Methods · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Weak-to-Strong Generalization · 6 weeks · 14d ago · due in 4w · 77 · 0.24
AI Whistleblower Protections · 6 weeks · 15d ago · due in 4w · 72 · 0.24
Governance-Focused Worldview · 6 weeks · 15d ago · due in 4w · 72 · 0.24
MIRI (Machine Intelligence Research Institute) · 3 weeks · 13d ago · due in 8d · 37 · 0.23
Chris Olah · 6 weeks · 51d ago · 6d overdue · 20 · 0.23
Paul Christiano · 6 weeks · 42d ago · due in 3d · 25 · 0.23
Sleeper Agents: Training Deceptive LLMs · 6 weeks · 12d ago · due in 5w · 85 · 0.23
Mainstream Era (2020-Present) · Quarterly · 47d ago · due in 6w · 42 · 0.22
Collective Intelligence / Coordination · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Neuro-Symbolic Hybrid Systems · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Novel / Unknown Approaches · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Fraud Sophistication Curve Model · Quarterly · 48d ago · due in 6w · 42 · 0.22
Adversarial Training · 6 weeks · 16d ago · due in 4w · 62 · 0.22
AI-Human Hybrid Systems · 6 weeks · 14d ago · due in 4w · 72 · 0.22
Prediction Markets (AI Forecasting) · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Reward Modeling · 6 weeks · 16d ago · due in 4w · 62 · 0.22
AI Distributional Shift · 6 weeks · 14d ago · due in 4w · 72 · 0.22
AI Preference Manipulation · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Sycophancy · 6 weeks · 16d ago · due in 4w · 62 · 0.22
Multipolar Competition - The Fragmented World · 6 weeks · 15d ago · due in 4w · 62 · 0.21
Lab Behavior & Industry · Weekly · 2d ago · due in 5d · 72 · 0.21
LongtermWiki Impact Model · 3 weeks · 9d ago · due in 12d · 50 · 0.21
Coefficient Giving · 6 weeks · 15d ago · due in 4w · 62 · 0.21
Founders Fund · 3 weeks · 10d ago · due in 11d · 45 · 0.21
Lionheart Ventures · 3 weeks · 10d ago · due in 11d · 45 · 0.21
Value Aligned Research Advisors · 3 weeks · 11d ago · due in 10d · 40 · 0.21
China AI Regulations · Weekly · 2d ago · due in 5d · 72 · 0.21
Failed and Stalled AI Policy Proposals · Weekly · 2d ago · due in 5d · 72 · 0.21
ForecastBench · 6 weeks · 15d ago · due in 4w · 62 · 0.21
US State AI Legislation · Weekly · 2d ago · due in 5d · 72 · 0.21
XPT (Existential Risk Persuasion Tournament) · 6 weeks · 15d ago · due in 4w · 64 · 0.21
Authentication Collapse · 6 weeks · 15d ago · due in 4w · 62 · 0.21
AI-Induced Expertise Atrophy · 6 weeks · 16d ago · due in 4w · 58 · 0.21
AI-Driven Trust Decline · 6 weeks · 15d ago · due in 4w · 62 · 0.21
Early Warnings (1950s-2000) · Quarterly · 52d ago · due in 5w · 35 · 0.20
Conjecture · 3 weeks · 15d ago · due in 6d · 28 · 0.20
FAR AI · 3 weeks · 13d ago · due in 8d · 32 · 0.20
Philip Tetlock (Forecasting Pioneer) · 6 weeks · 12d ago · due in 5w · 75 · 0.20
AI Welfare and Digital Minds · 6 weeks · 12d ago · due in 5w · 75 · 0.20
AI-Induced Enfeeblement · 6 weeks · 14d ago · due in 4w · 64 · 0.20
When Will AGI Arrive? · 6 weeks · 16d ago · due in 4w · 54 · 0.19
State-Space Models / Mamba · 6 weeks · 16d ago · due in 4w · 54 · 0.19
Centre for Effective Altruism · 6 weeks · 13d ago · due in 5w · 65 · 0.19
Venture Capital · 3 weeks · 10d ago · due in 11d · 40 · 0.19
Eli Lifland · 6 weeks · 12d ago · due in 5w · 72 · 0.19
Automation Bias (AI Systems) · 6 weeks · 15d ago · due in 4w · 58 · 0.19
Epistemic Collapse · 6 weeks · 13d ago · due in 5w · 65 · 0.19
Scientific Knowledge Corruption · 6 weeks · 14d ago · due in 4w · 62 · 0.19
Long-Timelines Technical Worldview · 6 weeks · 14d ago · due in 4w · 62 · 0.19
Optimistic Alignment Worldview · 6 weeks · 14d ago · due in 4w · 62 · 0.19
Public Opinion & Awareness · 6 weeks · 16d ago · due in 4w · 52 · 0.18
Meta & Structural Indicators · 6 weeks · 16d ago · due in 4w · 52 · 0.18
Forecasting Research Institute · 6 weeks · 15d ago · due in 4w · 54 · 0.18
Ilya Sutskever · 3 weeks · 15d ago · due in 6d · 25 · 0.18
Max Tegmark · 6 weeks · 11d ago · due in 5w · 75 · 0.18
Colorado AI Act (SB 205) · Weekly · 2d ago · due in 5d · 62 · 0.18
NIST AI Risk Management Framework · Weekly · 2d ago · due in 5d · 62 · 0.18
AI Model Steganography · 6 weeks · 14d ago · due in 4w · 58 · 0.18
Brain-Computer Interfaces · 3 weeks · 16d ago · due in 5d · 22 · 0.17
Longtermist Funders · 6 weeks · 10d ago · due in 5w · 75 · 0.17
Long-Term Benefit Trust (Anthropic) · 6 weeks · 10d ago · due in 5w · 78 · 0.17
Long-Term Future Fund (LTFF) · 6 weeks · 15d ago · due in 4w · 52 · 0.17
Robin Hanson · 6 weeks · 12d ago · due in 5w · 65 · 0.17
Deepfake Detection · 6 weeks · 14d ago · due in 4w · 54 · 0.17
The MIRI Era (2000-2015) · Quarterly · 51d ago · due in 6w · 28 · 0.16
AI Futures Project · 3 weeks · 12d ago · due in 9d · 28 · 0.16
AI Impacts · 6 weeks · 11d ago · due in 5w · 65 · 0.16
AI Revenue Sources · 3 weeks · 4d ago · due in 2w · 82 · 0.16
CSET (Center for Security and Emerging Technology) · 3 weeks · 15d ago · due in 6d · 23 · 0.16
Elon Musk (Funder) · 6 weeks · 10d ago · due in 5w · 70 · 0.16
Dustin Moskovitz (AI Safety Funder) · 6 weeks · 10d ago · due in 5w · 70 · 0.16
Jan Leike · 3 weeks · 15d ago · due in 6d · 22 · 0.16
Erosion of Human Agency · 6 weeks · 14d ago · due in 4w · 52 · 0.16
Aligned AGI - The Good Ending · 6 weeks · 14d ago · due in 4w · 48 · 0.15
Minimal Scaffolding · 6 weeks · 16d ago · due in 4w · 42 · 0.15
Neuromorphic Hardware · 6 weeks · 16d ago · due in 4w · 42 · 0.15
Carlsmith's Six-Premise Argument · Quarterly · 16d ago · due in 11w · 82 · 0.15
Survival and Flourishing Fund (SFF) · 6 weeks · 15d ago · due in 4w · 44 · 0.15
MIT AI Risk Repository · 6 weeks · 11d ago · due in 5w · 60 · 0.15
AI-Accelerated Reality Fragmentation · 6 weeks · 13d ago · due in 5w · 52 · 0.15
Parameter Table · 6 weeks · 16d ago · due in 4w · 42 · 0.15
The Case FOR AI Existential Risk · Quarterly · 15d ago · due in 11w · 87 · 0.14
Is Scaling All You Need? · 6 weeks · 15d ago · due in 4w · 42 · 0.14
Corrigibility Failure Pathways · Quarterly · 16d ago · due in 11w · 76 · 0.14
Institutional Adaptation Speed Model · Quarterly · 16d ago · due in 11w · 78 · 0.14
Power-Seeking Emergence Conditions Model · Quarterly · 16d ago · due in 11w · 78 · 0.14
Epoch AI · 6 weeks · 15d ago · due in 4w · 42 · 0.14
Future of Life Institute (FLI) · 6 weeks · 15d ago · due in 4w · 42 · 0.14
Lightning Rod Labs · 3 weeks · 12d ago · due in 9d · 25 · 0.14
Metaculus · 6 weeks · 15d ago · due in 4w · 42 · 0.14
Peter Thiel (Funder) · 6 weeks · 10d ago · due in 5w · 65 · 0.14
Schmidt Futures · 6 weeks · 10d ago · due in 5w · 65 · 0.14
SecureBio · 6 weeks · 13d ago · due in 5w · 48 · 0.14
The Sequences by Eliezer Yudkowsky · 6 weeks · 13d ago · due in 5w · 50 · 0.14
AI Forecasting Benchmark Tournament · 6 weeks · 15d ago · due in 4w · 42 · 0.14
AI-Driven Economic Disruption · 6 weeks · 15d ago · due in 4w · 43 · 0.14
Should We Pause AI Development? · 6 weeks · 14d ago · due in 4w · 42 · 0.13
Alignment Robustness Trajectory · Quarterly · 16d ago · due in 11w · 72 · 0.13
Feedback Loop & Cascade Model · Quarterly · 16d ago · due in 11w · 72 · 0.13
Safety Culture Equilibrium · Quarterly · 16d ago · due in 11w · 72 · 0.13
Trust Cascade Failure Model · Quarterly · 16d ago · due in 11w · 71 · 0.13
Anthropic Pre-IPO DAF Transfers · 6 weeks · 8d ago · due in 5w · 72 · 0.13
Centre for Long-Term Resilience · 6 weeks · 8d ago · due in 5w · 72 · 0.13
CSER (Centre for the Study of Existential Risk) · 3 weeks · 12d ago · due in 9d · 22 · 0.13
Johns Hopkins Center for Health Security · 6 weeks · 8d ago · due in 5w · 75 · 0.13
Secure AI Project · 3 weeks · 12d ago · due in 9d · 22 · 0.13
X Community Notes · 6 weeks · 14d ago · due in 4w · 42 · 0.13
Corrigibility Research · Quarterly · 15d ago · due in 11w · 78 · 0.13
AI Safety via Debate · Quarterly · 16d ago · due in 11w · 72 · 0.13
Formal Verification (AI Safety) · Quarterly · 16d ago · due in 11w · 72 · 0.13
Goal Misgeneralization Research · Quarterly · 16d ago · due in 11w · 72 · 0.13
Provably Safe AI (davidad agenda) · Quarterly · 16d ago · due in 11w · 72 · 0.13
Squiggle · 6 weeks · 15d ago · due in 4w · 38 · 0.13
Arb Research · 6 weeks · 12d ago · due in 5w · 45 · 0.12
Biosecurity Organizations · 6 weeks · 8d ago · due in 5w · 70 · 0.12
FutureSearch · 6 weeks · 12d ago · due in 5w · 45 · 0.12
Good Judgment (Forecasting) · 6 weeks · 12d ago · due in 5w · 45 · 0.12
Longview Philanthropy · 6 weeks · 15d ago · due in 4w · 37 · 0.12
Manifund · 6 weeks · 15d ago · due in 4w · 36 · 0.12
Vitalik Buterin (Funder) · 6 weeks · 10d ago · due in 5w · 55 · 0.12
Nuño Sempere · 6 weeks · 12d ago · due in 5w · 45 · 0.12
Vidur Kapur · 6 weeks · 12d ago · due in 5w · 45 · 0.12
AI for Human Reasoning Fellowship · 6 weeks · 12d ago · due in 5w · 45 · 0.12
Canada AIDA · Weekly · 2d ago · due in 5d · 42 · 0.12
Eliciting Latent Knowledge (ELK) · Quarterly · 14d ago · due in 11w · 77 · 0.12
Stampy / AISafety.info · 6 weeks · 11d ago · due in 5w · 50 · 0.12
AI Doomer Worldview · 6 weeks · 15d ago · due in 4w · 35 · 0.12
Electoral Impact Assessment Model · Quarterly · 16d ago · due in 11w · 62 · 0.11
Intervention Effectiveness Matrix · Quarterly · 11d ago · due in 11w · 87 · 0.11
Societal Response & Adaptation Model · Quarterly · 16d ago · due in 11w · 62 · 0.11
Cooperative IRL (CIRL) · Quarterly · 16d ago · due in 11w · 62 · 0.11
Cooperative AI · Quarterly · 16d ago · due in 11w · 62 · 0.11
AI Uplift Assessment Model · Quarterly · 11d ago · due in 11w · 78 · 0.10
Model Organisms of Misalignment · Quarterly · 12d ago · due in 11w · 72 · 0.10
Safety-Capability Tradeoff Model · Quarterly · 11d ago · due in 11w · 78 · 0.10
Scheming Likelihood Assessment · Quarterly · 11d ago · due in 11w · 78 · 0.10
Short Timeline Policy Implications · Quarterly · 11d ago · due in 11w · 78 · 0.10
Trust Erosion Dynamics Model · Quarterly · 16d ago · due in 11w · 56 · 0.10
Giving Pledge · 6 weeks · 10d ago · due in 5w · 45 · 0.10
William and Flora Hewlett Foundation · 6 weeks · 10d ago · due in 5w · 45 · 0.10
NTI | bio (Nuclear Threat Initiative - Biological Program) · 6 weeks · 8d ago · due in 5w · 55 · 0.10
Is AI Existential Risk Real? · 6 weeks · 16d ago · due in 4w · 25 · 0.09
Whole Brain Emulation · 6 weeks · 16d ago · due in 4w · 24 · 0.09
Manifest (Forecasting Conference) · 6 weeks · 11d ago · due in 5w · 35 · 0.09
QURI (Quantified Uncertainty Research Institute) · 6 weeks · 15d ago · due in 4w · 28 · 0.09
SecureDNA · 6 weeks · 8d ago · due in 5w · 50 · 0.09
Swift Centre · 6 weeks · 12d ago · due in 5w · 35 · 0.09
Design Sketches for Collective Epistemics · 6 weeks · 7d ago · due in 5w · 55 · 0.09
Epistemic Virtue Evals · 6 weeks · 7d ago · due in 5w · 55 · 0.09
Blueprint Biosecurity · 6 weeks · 8d ago · due in 5w · 44 · 0.08
Chan Zuckerberg Initiative · 6 weeks · 10d ago · due in 5w · 35 · 0.08
Coalition for Epidemic Preparedness Innovations · 6 weeks · 8d ago · due in 5w · 45 · 0.08
EA Global · 6 weeks · 10d ago · due in 5w · 35 · 0.08
IBBIS (International Biosecurity and Biosafety Initiative for Science) · 6 weeks · 8d ago · due in 5w · 45 · 0.08
Holden Karnofsky · 6 weeks · 15d ago · due in 4w · 25 · 0.08
Issa Rice · 6 weeks · 10d ago · due in 5w · 35 · 0.08
Stuart Russell · 6 weeks · 15d ago · due in 4w · 25 · 0.08
Toby Ord · 6 weeks · 15d ago · due in 4w · 23 · 0.08
Yoshua Bengio · 6 weeks · 15d ago · due in 4w · 25 · 0.08
AI Watch · 6 weeks · 10d ago · due in 5w · 35 · 0.08
Community Notes for Everything · 6 weeks · 7d ago · due in 5w · 50 · 0.08
AI Content Provenance Tracing · 6 weeks · 7d ago · due in 5w · 50 · 0.08
AI System Reliability Tracking · 6 weeks · 7d ago · due in 5w · 52 · 0.08
Timelines Wiki · 6 weeks · 10d ago · due in 5w · 35 · 0.08
Critical Insights · 6 weeks · 16d ago · due in 4w · 22 · 0.08
Anthropic Impact Assessment Model · Quarterly · 9d ago · due in 12w · 72 · 0.07
1Day Sooner · 6 weeks · 8d ago · due in 5w · 42 · 0.07
LessWrong · 6 weeks · 13d ago · due in 5w · 25 · 0.07
Lighthaven (Event Venue) · 6 weeks · 12d ago · due in 5w · 25 · 0.07
Red Queen Bio · 6 weeks · 8d ago · due in 5w · 42 · 0.07
Samotsvety · 6 weeks · 12d ago · due in 5w · 28 · 0.07
Geoffrey Hinton · 6 weeks · 15d ago · due in 4w · 21 · 0.07
Jaan Tallinn · 6 weeks · 8d ago · due in 5w · 42 · 0.07
Policy Effectiveness Assessment · 3 weeks · 2d ago · due in 3w · 78 · 0.07
AI-Assisted Rhetoric Highlighting · 6 weeks · 7d ago · due in 5w · 48 · 0.07
RoastMyPost · 6 weeks · 12d ago · due in 5w · 25 · 0.07
SquiggleAI · 6 weeks · 15d ago · due in 4w · 22 · 0.07
Anthropic Founder Pledges: Interventions to Increase Follow-Through · Quarterly · 8d ago · due in 12w · 72 · 0.06
Sentinel (Catastrophic Risk Foresight) · 6 weeks · 12d ago · due in 5w · 22 · 0.06
Elon Musk: Track Record · 6 weeks · 12d ago · due in 5w · 22 · 0.06
Yann LeCun: Track Record · 6 weeks · 12d ago · due in 5w · 22 · 0.06
Yann LeCun · 6 weeks · 12d ago · due in 5w · 24 · 0.06
Org Watch · 6 weeks · 10d ago · due in 5w · 25 · 0.06
Wikipedia Views · 6 weeks · 10d ago · due in 5w · 25 · 0.06
Public Opinion Evolution Model · Quarterly · 11d ago · due in 11w · 42 · 0.05
Eliezer Yudkowsky · 6 weeks · 12d ago · due in 5w · 20 · 0.05
Leopold Aschenbrenner · 6 weeks · 11d ago · due in 5w · 22 · 0.05
Donations List Website · 6 weeks · 10d ago · due in 5w · 22 · 0.05
Scientific Research Capabilities · 3 weeks · 1d ago · due in 3w · 78 · 0.04
Compute & Hardware · 3 weeks · 1d ago · due in 3w · 78 · 0.04
Council on Strategic Risks · 6 weeks · 8d ago · due in 5w · 25 · 0.04
Architecture Scenarios Table · Monthly · 3d ago · due in 4w · 30 · 0.03
Deployment Architectures Table · Monthly · 3d ago · due in 4w · 30 · 0.03
Evaluation Types Table · Monthly · 3d ago · due in 4w · 30 · 0.03
Goodfire · 3 weeks · 1d ago · due in 3w · 72 · 0.03
Safety Approaches Table · Monthly · 3d ago · due in 4w · 35 · 0.03
Safety Generalizability Table · Monthly · 3d ago · due in 4w · 30 · 0.03
Accident Risks Table · Monthly · 3d ago · due in 4w · 35 · 0.03
Compute Concentration · Monthly · 1d ago · due in 4w · 75 · 0.03
Projecting Compute Spending · Monthly · 1d ago · due in 4w · 72 · 0.02
AI-Assisted Knowledge Management · Monthly · 1d ago · due in 4w · 45 · 0.02
Wikipedia and AI Content · Monthly · 1d ago · due in 4w · 55 · 0.02
AI Acceleration Tradeoff Model · Quarterly · 1d ago · due in 13w · 87 · 0.01
Grokipedia · Monthly · 1d ago · due in 4w · 42 · 0.01
MAIM (Mutually Assured AI Malfunction) · Bimonthly · 1d ago · due in 8w · 72 · 0.01
Agentic AI · 3 weeks · 0d ago · due in 3w · 77 · 0.00
OpenClaw Matplotlib Incident (2026) · Monthly · 0d ago · due in 4w · 55 · 0.00
Sparse / MoE Transformers · 6 weeks · 0d ago · due in 6w · 62 · 0.00
Anthropic (Funder) · 6 weeks · 0d ago · due in 6w · 78 · 0.00
Anthropic IPO · 3 weeks · 0d ago · due in 3w · 75 · 0.00
Anthropic Valuation Analysis · 3 weeks · 0d ago · due in 3w · 78 · 0.00
Anthropic · Weekly · 0d ago · due in 3d · 62 · 0.00
Longterm Wiki · Weekly · 0d ago · due in 7d · 12 · 0.00
X.com Platform Epistemics · Monthly · 0d ago · due in 4w · 55 · 0.00
Rogue AI Scenarios · 6 weeks · 0d ago · due in 6w · 78 · 0.00