| Page | Cadence | Last Updated | Due in / Overdue | Score | Priority |
| --- | --- | --- | --- | --- | --- |
| AI-Enabled Untraceable Misuse | Monthly | 365d ago | 49w overdue | 55 | 8.03 |
| Corporate AI Safety Responses | Weekly | 16d ago | 9d overdue | 78 | 1.78 |
| Compute Monitoring | 3 weeks | 47d ago | 26d overdue | 78 | 1.75 |
| AI Alignment Research Agenda Comparison | 3 weeks | 47d ago | 26d overdue | 78 | 1.75 |
| Dario Amodei | Weekly | 51d ago | 6w overdue | 23 | 1.68 |
| AI Misuse Risk Cruxes | Weekly | 14d ago | 7d overdue | 82 | 1.64 |
| AI Chip Export Controls | 3 weeks | 47d ago | 26d overdue | 72 | 1.61 |
| International Compute Regimes | 3 weeks | 47d ago | 26d overdue | 72 | 1.61 |
| Seoul AI Safety Summit Declaration | 3 weeks | 47d ago | 26d overdue | 68 | 1.52 |
| Anthropic Core Views | 3 weeks | 47d ago | 26d overdue | 67 | 1.50 |
| California SB 53 | Weekly | 12d ago | 5d overdue | 85 | 1.46 |
| Geopolitics & Coordination | Weekly | 15d ago | 8d overdue | 67 | 1.44 |
| US Executive Order on Safe, Secure, and Trustworthy AI | Weekly | 14d ago | 7d overdue | 72 | 1.44 |
| Autonomous Weapons | 3 weeks | 47d ago | 26d overdue | 64 | 1.43 |
| AI Risk Public Education | 3 weeks | 48d ago | 27d overdue | 62 | 1.42 |
| UK AI Safety Institute | 3 weeks | 47d ago | 26d overdue | 62 | 1.39 |
| EU AI Act | Weekly | 12d ago | 5d overdue | 78 | 1.34 |
| Disinformation | 3 weeks | 51d ago | 30d overdue | 54 | 1.31 |
| ARC (Alignment Research Center) | 3 weeks | 51d ago | 30d overdue | 53 | 1.29 |
| Safe Superintelligence Inc (SSI) | Weekly | 12d ago | 5d overdue | 75 | 1.29 |
| New York RAISE Act | Weekly | 12d ago | 5d overdue | 75 | 1.29 |
| California SB 1047 | Weekly | 12d ago | 5d overdue | 74 | 1.27 |
| Deepfakes | 3 weeks | 51d ago | 30d overdue | 52 | 1.26 |
| Google DeepMind | Weekly | 15d ago | 12d overdue | 25 | 1.25 |
| Council of Europe Framework Convention on Artificial Intelligence | Weekly | 12d ago | 5d overdue | 65 | 1.11 |
| OpenAI Foundation | Weekly | 9d ago | 2d overdue | 85 | 1.09 |
| OpenAI Foundation Governance Paradox | Weekly | 9d ago | 2d overdue | 82 | 1.05 |
| CAIS (Center for AI Safety) | 3 weeks | 51d ago | 30d overdue | 42 | 1.02 |
| AI-Powered Fraud | 3 weeks | 51d ago | 30d overdue | 42 | 1.02 |
| Musk v. OpenAI Lawsuit | Weekly | 9d ago | 2d overdue | 78 | 1.00 |
| GovAI | 3 weeks | 47d ago | 26d overdue | 42 | 0.94 |
| CHAI (Center for Human-Compatible AI) | 3 weeks | 51d ago | 30d overdue | 38 | 0.92 |
| Microsoft AI | Weekly | 15d ago | 8d overdue | 42 | 0.90 |
| AI Safety Solution Cruxes | 6 weeks | 47d ago | 2d overdue | 82 | 0.86 |
| Frontier AI Company Comparison (2026) | Weekly | 9d ago | 2d overdue | 65 | 0.84 |
| AI Safety Institutes | 6 weeks | 47d ago | 2d overdue | 78 | 0.81 |
| AI Structural Risk Cruxes | 6 weeks | 47d ago | 2d overdue | 74 | 0.77 |
| Texas TRAIGA Responsible AI Governance Act | Weekly | 12d ago | 5d overdue | 45 | 0.77 |
| AI Flash Dynamics | 6 weeks | 47d ago | 2d overdue | 74 | 0.77 |
| AI-Powered Consensus Manufacturing | 6 weeks | 47d ago | 2d overdue | 73 | 0.76 |
| Corporate Influence on AI Policy | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI-Era Epistemic Security | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| Open Source AI Safety | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI Standards Bodies | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI-Driven Institutional Decision Capture | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI-Induced Irreversibility | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI Value Lock-in | 6 weeks | 47d ago | 2d overdue | 72 | 0.75 |
| AI Winner-Take-All Dynamics | 6 weeks | 51d ago | 6d overdue | 64 | 0.73 |
| AI Content Authentication | 6 weeks | 47d ago | 2d overdue | 64 | 0.67 |
| AI-Era Epistemic Infrastructure | 6 weeks | 47d ago | 2d overdue | 64 | 0.67 |
| Is Interpretability Sufficient for Safety? | 6 weeks | 47d ago | 2d overdue | 62 | 0.65 |
| AI Control | 3 weeks | 16d ago | 5d | 85 | 0.65 |
| AI-Augmented Forecasting | 6 weeks | 48d ago | 3d overdue | 61 | 0.65 |
| AI-Assisted Deliberation Platforms | 6 weeks | 47d ago | 2d overdue | 62 | 0.65 |
| Epistemic Sycophancy | 6 weeks | 47d ago | 2d overdue | 62 | 0.65 |
| Evals & Red-teaming | 3 weeks | 16d ago | 5d | 82 | 0.62 |
| AI Evaluation | 3 weeks | 16d ago | 5d | 82 | 0.62 |
| Dangerous Capability Evaluations | 3 weeks | 15d ago | 6d | 84 | 0.60 |
| Long-Horizon Autonomous Tasks | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Self-Improvement and Recursive Enhancement | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Situational Awareness | 3 weeks | 15d ago | 6d | 83 | 0.59 |
| METR | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| AI-Assisted Alignment | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| AI Alignment | 3 weeks | 14d ago | 7d | 88 | 0.59 |
| AI Governance and Policy | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Mechanistic Interpretability | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Pause / Moratorium | 3 weeks | 16d ago | 5d | 78 | 0.59 |
| Responsible Scaling Policies | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Sleeper Agent Detection | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| Technical AI Safety Research | 3 weeks | 15d ago | 6d | 82 | 0.59 |
| AI Knowledge Monopoly | 6 weeks | 51d ago | 6d overdue | 52 | 0.59 |
| Autonomous Coding | 3 weeks | 15d ago | 6d | 81 | 0.58 |
| AI Safety Intervention Portfolio | 3 weeks | 14d ago | 7d | 87 | 0.58 |
| Model Registries | 3 weeks | 16d ago | 5d | 75 | 0.57 |
| Scheming & Deception Detection | 3 weeks | 14d ago | 7d | 85 | 0.57 |
| Persuasion and Social Manipulation | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| AI Capabilities Metrics | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Alignment Evaluations | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Circuit Breakers / Inference Interventions | 3 weeks | 16d ago | 5d | 74 | 0.56 |
| Evals-Based Deployment Gates | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Hardware-Enabled Governance | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Mechanistic Interpretability | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Red Teaming | 3 weeks | 16d ago | 5d | 74 | 0.56 |
| Responsible Scaling Policies | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Tool-Use Restrictions | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Voluntary Industry Commitments | 3 weeks | 15d ago | 6d | 78 | 0.56 |
| Large Language Models | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| Reasoning and Planning | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| Alignment Progress | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| Capability Elicitation | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| Representation Engineering | 3 weeks | 16d ago | 5d | 72 | 0.55 |
| AI Safety Cases | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| Bioweapons | 3 weeks | 14d ago | 7d | 82 | 0.55 |
| International Coordination Mechanisms | 3 weeks | 15d ago | 6d | 75 | 0.54 |
| Third-Party Model Auditing | 3 weeks | 15d ago | 6d | 75 | 0.54 |
| Multi-Agent Safety | 3 weeks | 15d ago | 6d | 76 | 0.54 |
| Epistemic Learned Helplessness | 6 weeks | 42d ago | 3d | 58 | 0.54 |
| AGI Timeline | 3 weeks | 15d ago | 6d | 74 | 0.53 |
| Tool Use and Computer Use | 3 weeks | 14d ago | 7d | 78 | 0.52 |
| Sandboxing / Containment | 3 weeks | 14d ago | 7d | 78 | 0.52 |
| Structured Access / API-Only | 3 weeks | 14d ago | 7d | 78 | 0.52 |
| Compute Thresholds | 3 weeks | 14d ago | 7d | 78 | 0.52 |
| Large Language Models | 3 weeks | 15d ago | 6d | 72 | 0.51 |
| Heavy Scaffolding / Agentic Systems | 3 weeks | 15d ago | 6d | 72 | 0.51 |
| Expert Opinion | 3 weeks | 15d ago | 6d | 71 | 0.51 |
| International AI Safety Summits | 3 weeks | 15d ago | 6d | 72 | 0.51 |
| AI Trust Cascade Failure | 6 weeks | 51d ago | 6d overdue | 45 | 0.51 |
| Dense Transformers | 3 weeks | 14d ago | 7d | 73 | 0.49 |
| Leading the Future super PAC | 3 weeks | 12d ago | 9d | 85 | 0.49 |
| AI Output Filtering | 3 weeks | 16d ago | 5d | 64 | 0.49 |
| Open vs Closed Source AI | 3 weeks | 15d ago | 6d | 67 | 0.48 |
| Safety Research & Resources | 3 weeks | 14d ago | 7d | 72 | 0.48 |
| Sparse Autoencoders (SAEs) | 3 weeks | 14d ago | 7d | 72 | 0.48 |
| AI-Induced Cyber Psychosis | 6 weeks | 51d ago | 6d overdue | 42 | 0.48 |
| Cyberweapons | 3 weeks | 14d ago | 7d | 72 | 0.48 |
| Historical Revisionism | 6 weeks | 51d ago | 6d overdue | 42 | 0.48 |
| AI-Driven Legal Evidence Crisis | 6 weeks | 51d ago | 6d overdue | 42 | 0.48 |
| Mass Surveillance | 3 weeks | 15d ago | 6d | 67 | 0.48 |
| AGI Development | 3 weeks | 16d ago | 5d | 62 | 0.47 |
| Light Scaffolding | 3 weeks | 16d ago | 5d | 62 | 0.47 |
| World Models + Planning | 3 weeks | 16d ago | 5d | 62 | 0.47 |
| Probing / Linear Probes | 3 weeks | 16d ago | 5d | 62 | 0.47 |
| Defense in Depth Model | Quarterly | 49d ago | 6w | 82 | 0.45 |
| Risk Activation Timeline Model | Quarterly | 49d ago | 6w | 82 | 0.45 |
| US AI Safety Institute | 3 weeks | 14d ago | 7d | 68 | 0.45 |
| Government Regulation vs Industry Self-Governance | 3 weeks | 15d ago | 6d | 62 | 0.44 |
| Pause and Redirect - The Deliberate Path | 3 weeks | 15d ago | 6d | 62 | 0.44 |
| Apollo Research | 3 weeks | 15d ago | 6d | 62 | 0.44 |
| AI Labor Transition & Economic Resilience | 6 weeks | 47d ago | 2d overdue | 42 | 0.44 |
| AI Model Specifications | 3 weeks | 15d ago | 6d | 61 | 0.44 |
| Authoritarian Tools | 3 weeks | 15d ago | 6d | 62 | 0.44 |
| AI Risk Portfolio Analysis | Quarterly | 47d ago | 6w | 82 | 0.43 |
| Capabilities-to-Safety Pipeline Model | Quarterly | 48d ago | 6w | 81 | 0.43 |
| Capability-Alignment Race Model | Quarterly | 47d ago | 6w | 82 | 0.43 |
| Capability Threshold Model | Quarterly | 47d ago | 6w | 82 | 0.43 |
| Intervention Timing Windows | Quarterly | 47d ago | 6w | 82 | 0.43 |
| Racing Dynamics Impact Model | Quarterly | 50d ago | 6w | 78 | 0.43 |
| ControlAI | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| Frontier Model Forum | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| Meta AI (FAIR) | Weekly | 2d ago | 1d | 65 | 0.43 |
| NIST and AI Safety | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| OpenAI | Weekly | 2d ago | 1d | 65 | 0.43 |
| Palisade Research | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| David Sacks (White House AI Czar) | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| Demis Hassabis | 6 weeks | 47d ago | 2d overdue | 41 | 0.43 |
| Marc Andreessen (AI Investor) | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| Bletchley Declaration | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| Compute Governance: AI Chips Export Controls Policy | 3 weeks | 12d ago | 9d | 75 | 0.43 |
| AI-Bioweapons Timeline Model | Quarterly | 49d ago | 6w | 78 | 0.42 |
| Autonomous Cyber Attack Timeline | Quarterly | 48d ago | 6w | 78 | 0.42 |
| Instrumental Convergence Framework | Quarterly | 49d ago | 6w | 78 | 0.42 |
| Risk Cascade Pathways | Quarterly | 49d ago | 6w | 78 | 0.42 |
| Risk Interaction Network | Quarterly | 48d ago | 6w | 78 | 0.42 |
| Safety Research Allocation Model | Quarterly | 48d ago | 6w | 78 | 0.42 |
| AI Safety Talent Supply/Demand Gap Model | Quarterly | 48d ago | 6w | 78 | 0.42 |
| Warning Signs Model | Quarterly | 49d ago | 6w | 78 | 0.42 |
| Worldview-Intervention Mapping | Quarterly | 49d ago | 6w | 78 | 0.42 |
| Autonomous Weapons Escalation Model | Quarterly | 48d ago | 6w | 77 | 0.41 |
| Bioweapons Attack Chain Model | Quarterly | 49d ago | 6w | 76 | 0.41 |
| Multipolar Trap Dynamics Model | Quarterly | 49d ago | 6w | 76 | 0.41 |
| AI Proliferation Risk Model | Quarterly | 49d ago | 6w | 76 | 0.41 |
| Expected Value of AI Safety Research | Quarterly | 49d ago | 6w | 75 | 0.41 |
| Epoch AI | 3 weeks | 14d ago | 7d | 62 | 0.41 |
| Flash Dynamics Threshold Model | Quarterly | 49d ago | 6w | 73 | 0.40 |
| Risk Interaction Matrix Model | Quarterly | 47d ago | 6w | 76 | 0.40 |
| Redwood Research | 3 weeks | 13d ago | 8d | 65 | 0.40 |
| Scalable Oversight | Quarterly | 47d ago | 6w | 77 | 0.40 |
| Authentication Collapse Timeline Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| Authoritarian Tools Diffusion Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| AI Risk Critical Uncertainties Model | Quarterly | 47d ago | 6w | 74 | 0.39 |
| Cyber Offense-Defense Balance Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| Deceptive Alignment Decomposition Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| Goal Misgeneralization Probability Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| Irreversibility Threshold Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| Mesa-Optimization Risk Analysis | Quarterly | 49d ago | 6w | 71 | 0.39 |
| Winner-Take-All Concentration Model | Quarterly | 49d ago | 6w | 72 | 0.39 |
| International AI Coordination Game | Quarterly | 48d ago | 6w | 72 | 0.38 |
| Multi-Actor Strategic Landscape | Quarterly | 47d ago | 6w | 72 | 0.38 |
| Regulatory Capacity Threshold Model | Quarterly | 46d ago | 6w | 74 | 0.38 |
| Reward Hacking Taxonomy and Severity Model | Quarterly | 47d ago | 6w | 72 | 0.38 |
| AI Surveillance and Regime Durability Model | Quarterly | 47d ago | 6w | 72 | 0.38 |
| Technical Pathway Decomposition | Quarterly | 47d ago | 6w | 72 | 0.38 |
| Whistleblower Dynamics Model | Quarterly | 48d ago | 6w | 72 | 0.38 |
| Sam Altman | Weekly | 12d ago | 5d overdue | 22 | 0.38 |
| Claude Code Espionage Incident (2025) | 3 weeks | 12d ago | 9d | 65 | 0.37 |
| LAWS Proliferation Model | Quarterly | 49d ago | 6w | 68 | 0.37 |
| Parameter Interaction Network | Quarterly | 46d ago | 6w | 72 | 0.37 |
| Sycophancy Feedback Loop Model | Quarterly | 50d ago | 6w | 67 | 0.37 |
| MATS ML Alignment Theory Scholars program | 3 weeks | 12d ago | 9d | 65 | 0.37 |
| Why Alignment Might Be Easy | Quarterly | 47d ago | 6w | 68 | 0.36 |
| Automation Bias Cascade Model | Quarterly | 48d ago | 6w | 68 | 0.36 |
| Compounding Risks Analysis | Quarterly | 49d ago | 6w | 67 | 0.36 |
| Elon Musk (AI Industry) | Weekly | 12d ago | 5d overdue | 21 | 0.36 |
| Epistemic Collapse Threshold Model | Quarterly | 49d ago | 6w | 62 | 0.34 |
| Expertise Atrophy Cascade Model | Quarterly | 49d ago | 6w | 62 | 0.34 |
| Agent Foundations | Quarterly | 47d ago | 6w | 65 | 0.34 |
| Surveillance Chilling Effects Model | Quarterly | 48d ago | 6w | 62 | 0.33 |
| Why Alignment Might Be Hard | Quarterly | 47d ago | 6w | 62 | 0.32 |
| Economic & Labor Metrics | 3 weeks | 16d ago | 5d | 42 | 0.32 |
| 80,000 Hours | 3 weeks | 15d ago | 6d | 42 | 0.30 |
| xAI | Weekly | 2d ago | 1d | 45 | 0.30 |
| Deceptive Alignment | 6 weeks | 16d ago | 4w | 85 | 0.30 |
| Biological / Organoid Computing | 3 weeks | 16d ago | 5d | 38 | 0.29 |
| Expertise Atrophy Progression Model | Quarterly | 50d ago | 6w | 52 | 0.29 |
| Media-Policy Feedback Loop Model | Quarterly | 48d ago | 6w | 54 | 0.29 |
| Power-Seeking AI | 6 weeks | 15d ago | 4w | 87 | 0.29 |
| AI Development Racing Dynamics | 6 weeks | 16d ago | 4w | 82 | 0.29 |
| Treacherous Turn | 6 weeks | 15d ago | 4w | 87 | 0.29 |
| The Case AGAINST AI Existential Risk | Quarterly | 47d ago | 6w | 54 | 0.28 |
| Post-Incident Recovery Model | Quarterly | 48d ago | 6w | 52 | 0.28 |
| RLHF / Constitutional AI | 6 weeks | 15d ago | 4w | 83 | 0.28 |
| AI Safety Training Programs | 6 weeks | 16d ago | 4w | 75 | 0.27 |
| Corrigibility Failure | 6 weeks | 15d ago | 4w | 82 | 0.27 |
| Instrumental Convergence | 6 weeks | 15d ago | 4w | 82 | 0.27 |
| Mesa-Optimization | 6 weeks | 15d ago | 4w | 82 | 0.27 |
| AI Capability Sandbagging | 6 weeks | 15d ago | 4w | 82 | 0.27 |
| Scheming | 6 weeks | 15d ago | 4w | 82 | 0.27 |
| Sharp Left Turn | 6 weeks | 15d ago | 4w | 81 | 0.27 |
| AI Epistemic Cruxes | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Provable / Guaranteed Safe AI | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Disinformation Detection Arms Race Model | Quarterly | 48d ago | 6w | 48 | 0.26 |
| Global Partnership on Artificial Intelligence (GPAI) | 3 weeks | 12d ago | 9d | 45 | 0.26 |
| Seldon Lab | 3 weeks | 12d ago | 9d | 45 | 0.26 |
| Capability Unlearning / Removal | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Constitutional AI | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Eval Saturation & The Evals Gap | 3 weeks | 6d ago | 2w | 90 | 0.26 |
| Evaluation Awareness | 3 weeks | 6d ago | 2w | 90 | 0.26 |
| AI Lab Safety Culture | 6 weeks | 15d ago | 4w | 78 | 0.26 |
| Process Supervision | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Refusal Training | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| AI-Enabled Authoritarian Takeover | 6 weeks | 15d ago | 4w | 78 | 0.26 |
| AI-Driven Concentration of Power | 6 weeks | 16d ago | 4w | 72 | 0.26 |
| Emergent Capabilities | 6 weeks | 15d ago | 4w | 78 | 0.26 |
| Goal Misgeneralization | 6 weeks | 15d ago | 4w | 78 | 0.26 |
| Multipolar Trap (AI Development) | 6 weeks | 14d ago | 4w | 82 | 0.26 |
| Proliferation | 6 weeks | 15d ago | 4w | 78 | 0.26 |
| Deep Learning Revolution (2012-2020) | Quarterly | 51d ago | 6w | 44 | 0.25 |
| Deepfakes Authentication Crisis Model | Quarterly | 48d ago | 6w | 47 | 0.25 |
| Nick Bostrom | 6 weeks | 51d ago | 6d overdue | 22 | 0.25 |
| Scalable Eval Approaches | 3 weeks | 6d ago | 2w | 88 | 0.25 |
| Reward Hacking | 6 weeks | 14d ago | 4w | 79 | 0.25 |
| AI Accident Risk Cruxes | 6 weeks | 14d ago | 4w | 78 | 0.24 |
| Misaligned Catastrophe - The Bad Ending | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Slow Takeoff Muddle - Muddling Through | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Epistemic & Forecasting Organizations | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Future of Humanity Institute (FHI) | 3 weeks | 15d ago | 6d | 34 | 0.24 |
| Pause AI | 3 weeks | 12d ago | 9d | 42 | 0.24 |
| Biosecurity Interventions | 3 weeks | 7d ago | 14d | 72 | 0.24 |
| AI Governance Coordination Technologies | 6 weeks | 14d ago | 4w | 77 | 0.24 |
| Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | 3 weeks | 7d ago | 14d | 72 | 0.24 |
| AI Safety Field Building Analysis | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Pause Advocacy | 6 weeks | 14d ago | 4w | 78 | 0.24 |
| Preference Optimization Methods | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Weak-to-Strong Generalization | 6 weeks | 14d ago | 4w | 77 | 0.24 |
| AI Whistleblower Protections | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| Governance-Focused Worldview | 6 weeks | 15d ago | 4w | 72 | 0.24 |
| MIRI (Machine Intelligence Research Institute) | 3 weeks | 13d ago | 8d | 37 | 0.23 |
| Chris Olah | 6 weeks | 51d ago | 6d overdue | 20 | 0.23 |
| Paul Christiano | 6 weeks | 42d ago | 3d | 25 | 0.23 |
| Sleeper Agents: Training Deceptive LLMs | 6 weeks | 12d ago | 5w | 85 | 0.23 |
| Mainstream Era (2020-Present) | Quarterly | 47d ago | 6w | 42 | 0.22 |
| Collective Intelligence / Coordination | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Neuro-Symbolic Hybrid Systems | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Novel / Unknown Approaches | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Fraud Sophistication Curve Model | Quarterly | 48d ago | 6w | 42 | 0.22 |
| Adversarial Training | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| AI-Human Hybrid Systems | 6 weeks | 14d ago | 4w | 72 | 0.22 |
| Prediction Markets (AI Forecasting) | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Reward Modeling | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| AI Distributional Shift | 6 weeks | 14d ago | 4w | 72 | 0.22 |
| AI Preference Manipulation | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Sycophancy | 6 weeks | 16d ago | 4w | 62 | 0.22 |
| Multipolar Competition - The Fragmented World | 6 weeks | 15d ago | 4w | 62 | 0.21 |
| Lab Behavior & Industry | Weekly | 2d ago | 5d | 72 | 0.21 |
| LongtermWiki Impact Model | 3 weeks | 9d ago | 12d | 50 | 0.21 |
| Coefficient Giving | 6 weeks | 15d ago | 4w | 62 | 0.21 |
| Founders Fund | 3 weeks | 10d ago | 11d | 45 | 0.21 |
| Lionheart Ventures | 3 weeks | 10d ago | 11d | 45 | 0.21 |
| Value Aligned Research Advisors | 3 weeks | 11d ago | 10d | 40 | 0.21 |
| China AI Regulations | Weekly | 2d ago | 5d | 72 | 0.21 |
| Failed and Stalled AI Policy Proposals | Weekly | 2d ago | 5d | 72 | 0.21 |
| ForecastBench | 6 weeks | 15d ago | 4w | 62 | 0.21 |
| US State AI Legislation | Weekly | 2d ago | 5d | 72 | 0.21 |
| XPT (Existential Risk Persuasion Tournament) | 6 weeks | 15d ago | 4w | 64 | 0.21 |
| Authentication Collapse | 6 weeks | 15d ago | 4w | 62 | 0.21 |
| AI-Induced Expertise Atrophy | 6 weeks | 16d ago | 4w | 58 | 0.21 |
| AI-Driven Trust Decline | 6 weeks | 15d ago | 4w | 62 | 0.21 |
| Early Warnings (1950s-2000) | Quarterly | 52d ago | 5w | 35 | 0.20 |
| Conjecture | 3 weeks | 15d ago | 6d | 28 | 0.20 |
| FAR AI | 3 weeks | 13d ago | 8d | 32 | 0.20 |
| Philip Tetlock (Forecasting Pioneer) | 6 weeks | 12d ago | 5w | 75 | 0.20 |
| AI Welfare and Digital Minds | 6 weeks | 12d ago | 5w | 75 | 0.20 |
| AI-Induced Enfeeblement | 6 weeks | 14d ago | 4w | 64 | 0.20 |
| When Will AGI Arrive? | 6 weeks | 16d ago | 4w | 54 | 0.19 |
| State-Space Models / Mamba | 6 weeks | 16d ago | 4w | 54 | 0.19 |
| Centre for Effective Altruism | 6 weeks | 13d ago | 5w | 65 | 0.19 |
| Venture Capital | 3 weeks | 10d ago | 11d | 40 | 0.19 |
| Eli Lifland | 6 weeks | 12d ago | 5w | 72 | 0.19 |
| Automation Bias (AI Systems) | 6 weeks | 15d ago | 4w | 58 | 0.19 |
| Epistemic Collapse | 6 weeks | 13d ago | 5w | 65 | 0.19 |
| Scientific Knowledge Corruption | 6 weeks | 14d ago | 4w | 62 | 0.19 |
| Long-Timelines Technical Worldview | 6 weeks | 14d ago | 4w | 62 | 0.19 |
| Optimistic Alignment Worldview | 6 weeks | 14d ago | 4w | 62 | 0.19 |
| Public Opinion & Awareness | 6 weeks | 16d ago | 4w | 52 | 0.18 |
| Meta & Structural Indicators | 6 weeks | 16d ago | 4w | 52 | 0.18 |
| Forecasting Research Institute | 6 weeks | 15d ago | 4w | 54 | 0.18 |
| Ilya Sutskever | 3 weeks | 15d ago | 6d | 25 | 0.18 |
| Max Tegmark | 6 weeks | 11d ago | 5w | 75 | 0.18 |
| Colorado AI Act (SB 205) | Weekly | 2d ago | 5d | 62 | 0.18 |
| NIST AI Risk Management Framework | Weekly | 2d ago | 5d | 62 | 0.18 |
| AI Model Steganography | 6 weeks | 14d ago | 4w | 58 | 0.18 |
| Brain-Computer Interfaces | 3 weeks | 16d ago | 5d | 22 | 0.17 |
| Longtermist Funders | 6 weeks | 10d ago | 5w | 75 | 0.17 |
| Long-Term Benefit Trust (Anthropic) | 6 weeks | 10d ago | 5w | 78 | 0.17 |
| Long-Term Future Fund (LTFF) | 6 weeks | 15d ago | 4w | 52 | 0.17 |
| Robin Hanson | 6 weeks | 12d ago | 5w | 65 | 0.17 |
| Deepfake Detection | 6 weeks | 14d ago | 4w | 54 | 0.17 |
| The MIRI Era (2000-2015) | Quarterly | 51d ago | 6w | 28 | 0.16 |
| AI Futures Project | 3 weeks | 12d ago | 9d | 28 | 0.16 |
| AI Impacts | 6 weeks | 11d ago | 5w | 65 | 0.16 |
| AI Revenue Sources | 3 weeks | 4d ago | 2w | 82 | 0.16 |
| CSET (Center for Security and Emerging Technology) | 3 weeks | 15d ago | 6d | 23 | 0.16 |
| Elon Musk (Funder) | 6 weeks | 10d ago | 5w | 70 | 0.16 |
| Dustin Moskovitz (AI Safety Funder) | 6 weeks | 10d ago | 5w | 70 | 0.16 |
| Jan Leike | 3 weeks | 15d ago | 6d | 22 | 0.16 |
| Erosion of Human Agency | 6 weeks | 14d ago | 4w | 52 | 0.16 |
| Aligned AGI - The Good Ending | 6 weeks | 14d ago | 4w | 48 | 0.15 |
| Minimal Scaffolding | 6 weeks | 16d ago | 4w | 42 | 0.15 |
| Neuromorphic Hardware | 6 weeks | 16d ago | 4w | 42 | 0.15 |
| Carlsmith's Six-Premise Argument | Quarterly | 16d ago | 11w | 82 | 0.15 |
| Survival and Flourishing Fund (SFF) | 6 weeks | 15d ago | 4w | 44 | 0.15 |
| MIT AI Risk Repository | 6 weeks | 11d ago | 5w | 60 | 0.15 |
| AI-Accelerated Reality Fragmentation | 6 weeks | 13d ago | 5w | 52 | 0.15 |
| Parameter Table | 6 weeks | 16d ago | 4w | 42 | 0.15 |
| The Case FOR AI Existential Risk | Quarterly | 15d ago | 11w | 87 | 0.14 |
| Is Scaling All You Need? | 6 weeks | 15d ago | 4w | 42 | 0.14 |
| Corrigibility Failure Pathways | Quarterly | 16d ago | 11w | 76 | 0.14 |
| Institutional Adaptation Speed Model | Quarterly | 16d ago | 11w | 78 | 0.14 |
| Power-Seeking Emergence Conditions Model | Quarterly | 16d ago | 11w | 78 | 0.14 |
| Epoch AI | 6 weeks | 15d ago | 4w | 42 | 0.14 |
| Future of Life Institute (FLI) | 6 weeks | 15d ago | 4w | 42 | 0.14 |
| Lightning Rod Labs | 3 weeks | 12d ago | 9d | 25 | 0.14 |
| Metaculus | 6 weeks | 15d ago | 4w | 42 | 0.14 |
| Peter Thiel (Funder) | 6 weeks | 10d ago | 5w | 65 | 0.14 |
| Schmidt Futures | 6 weeks | 10d ago | 5w | 65 | 0.14 |
| SecureBio | 6 weeks | 13d ago | 5w | 48 | 0.14 |
| The Sequences by Eliezer Yudkowsky | 6 weeks | 13d ago | 5w | 50 | 0.14 |
| AI Forecasting Benchmark Tournament | 6 weeks | 15d ago | 4w | 42 | 0.14 |
| AI-Driven Economic Disruption | 6 weeks | 15d ago | 4w | 43 | 0.14 |
| Should We Pause AI Development? | 6 weeks | 14d ago | 4w | 42 | 0.13 |
| Alignment Robustness Trajectory | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Feedback Loop & Cascade Model | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Safety Culture Equilibrium | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Trust Cascade Failure Model | Quarterly | 16d ago | 11w | 71 | 0.13 |
| Anthropic Pre-IPO DAF Transfers | 6 weeks | 8d ago | 5w | 72 | 0.13 |
| Centre for Long-Term Resilience | 6 weeks | 8d ago | 5w | 72 | 0.13 |
| CSER (Centre for the Study of Existential Risk) | 3 weeks | 12d ago | 9d | 22 | 0.13 |
| Johns Hopkins Center for Health Security | 6 weeks | 8d ago | 5w | 75 | 0.13 |
| Secure AI Project | 3 weeks | 12d ago | 9d | 22 | 0.13 |
| X Community Notes | 6 weeks | 14d ago | 4w | 42 | 0.13 |
| Corrigibility Research | Quarterly | 15d ago | 11w | 78 | 0.13 |
| AI Safety via Debate | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Formal Verification (AI Safety) | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Goal Misgeneralization Research | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Provably Safe AI (davidad agenda) | Quarterly | 16d ago | 11w | 72 | 0.13 |
| Squiggle | 6 weeks | 15d ago | 4w | 38 | 0.13 |
| Arb Research | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| Biosecurity Organizations | 6 weeks | 8d ago | 5w | 70 | 0.12 |
| FutureSearch | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| Good Judgment (Forecasting) | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| Longview Philanthropy | 6 weeks | 15d ago | 4w | 37 | 0.12 |
| Manifund | 6 weeks | 15d ago | 4w | 36 | 0.12 |
| Vitalik Buterin (Funder) | 6 weeks | 10d ago | 5w | 55 | 0.12 |
| Nuño Sempere | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| Vidur Kapur | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| AI for Human Reasoning Fellowship | 6 weeks | 12d ago | 5w | 45 | 0.12 |
| Canada AIDA | Weekly | 2d ago | 5d | 42 | 0.12 |
| Eliciting Latent Knowledge (ELK) | Quarterly | 14d ago | 11w | 77 | 0.12 |
| Stampy / AISafety.info | 6 weeks | 11d ago | 5w | 50 | 0.12 |
| AI Doomer Worldview | 6 weeks | 15d ago | 4w | 35 | 0.12 |
| Electoral Impact Assessment Model | Quarterly | 16d ago | 11w | 62 | 0.11 |
| Intervention Effectiveness Matrix | Quarterly | 11d ago | 11w | 87 | 0.11 |
| Societal Response & Adaptation Model | Quarterly | 16d ago | 11w | 62 | 0.11 |
| Cooperative IRL (CIRL) | Quarterly | 16d ago | 11w | 62 | 0.11 |
| Cooperative AI | Quarterly | 16d ago | 11w | 62 | 0.11 |
| AI Uplift Assessment Model | Quarterly | 11d ago | 11w | 78 | 0.10 |
| Model Organisms of Misalignment | Quarterly | 12d ago | 11w | 72 | 0.10 |
| Safety-Capability Tradeoff Model | Quarterly | 11d ago | 11w | 78 | 0.10 |
| Scheming Likelihood Assessment | Quarterly | 11d ago | 11w | 78 | 0.10 |
| Short Timeline Policy Implications | Quarterly | 11d ago | 11w | 78 | 0.10 |
| Trust Erosion Dynamics Model | Quarterly | 16d ago | 11w | 56 | 0.10 |
| Giving Pledge | 6 weeks | 10d ago | 5w | 45 | 0.10 |
| William and Flora Hewlett Foundation | 6 weeks | 10d ago | 5w | 45 | 0.10 |
| NTI \| bio (Nuclear Threat Initiative - Biological Program) | 6 weeks | 8d ago | 5w | 55 | 0.10 |
| Is AI Existential Risk Real? | 6 weeks | 16d ago | 4w | 25 | 0.09 |
| Whole Brain Emulation | 6 weeks | 16d ago | 4w | 24 | 0.09 |
| Manifest (Forecasting Conference) | 6 weeks | 11d ago | 5w | 35 | 0.09 |
| QURI (Quantified Uncertainty Research Institute) | 6 weeks | 15d ago | 4w | 28 | 0.09 |
| SecureDNA | 6 weeks | 8d ago | 5w | 50 | 0.09 |
| Swift Centre | 6 weeks | 12d ago | 5w | 35 | 0.09 |
| Design Sketches for Collective Epistemics | 6 weeks | 7d ago | 5w | 55 | 0.09 |
| Epistemic Virtue Evals | 6 weeks | 7d ago | 5w | 55 | 0.09 |
| Blueprint Biosecurity | 6 weeks | 8d ago | 5w | 44 | 0.08 |
| Chan Zuckerberg Initiative | 6 weeks | 10d ago | 5w | 35 | 0.08 |
| Coalition for Epidemic Preparedness Innovations | 6 weeks | 8d ago | 5w | 45 | 0.08 |
| EA Global | 6 weeks | 10d ago | 5w | 35 | 0.08 |
| IBBIS (International Biosecurity and Biosafety Initiative for Science) | 6 weeks | 8d ago | 5w | 45 | 0.08 |
| Holden Karnofsky | 6 weeks | 15d ago | 4w | 25 | 0.08 |
| Issa Rice | 6 weeks | 10d ago | 5w | 35 | 0.08 |
| Stuart Russell | 6 weeks | 15d ago | 4w | 25 | 0.08 |
| Toby Ord | 6 weeks | 15d ago | 4w | 23 | 0.08 |
| Yoshua Bengio | 6 weeks | 15d ago | 4w | 25 | 0.08 |
| AI Watch | 6 weeks | 10d ago | 5w | 35 | 0.08 |
| Community Notes for Everything | 6 weeks | 7d ago | 5w | 50 | 0.08 |
| AI Content Provenance Tracing | 6 weeks | 7d ago | 5w | 50 | 0.08 |
| AI System Reliability Tracking | 6 weeks | 7d ago | 5w | 52 | 0.08 |
| Timelines Wiki | 6 weeks | 10d ago | 5w | 35 | 0.08 |
| Critical Insights | 6 weeks | 16d ago | 4w | 22 | 0.08 |
| Anthropic Impact Assessment Model | Quarterly | 9d ago | 12w | 72 | 0.07 |
| 1Day Sooner | 6 weeks | 8d ago | 5w | 42 | 0.07 |
| LessWrong | 6 weeks | 13d ago | 5w | 25 | 0.07 |
| Lighthaven (Event Venue) | 6 weeks | 12d ago | 5w | 25 | 0.07 |
| Red Queen Bio | 6 weeks | 8d ago | 5w | 42 | 0.07 |
| Samotsvety | 6 weeks | 12d ago | 5w | 28 | 0.07 |
| Geoffrey Hinton | 6 weeks | 15d ago | 4w | 21 | 0.07 |
| Jaan Tallinn | 6 weeks | 8d ago | 5w | 42 | 0.07 |
| Policy Effectiveness Assessment | 3 weeks | 2d ago | 3w | 78 | 0.07 |
| AI-Assisted Rhetoric Highlighting | 6 weeks | 7d ago | 5w | 48 | 0.07 |
| RoastMyPost | 6 weeks | 12d ago | 5w | 25 | 0.07 |
| SquiggleAI | 6 weeks | 15d ago | 4w | 22 | 0.07 |
| Anthropic Founder Pledges: Interventions to Increase Follow-Through | Quarterly | 8d ago | 12w | 72 | 0.06 |
| Sentinel (Catastrophic Risk Foresight) | 6 weeks | 12d ago | 5w | 22 | 0.06 |
| Elon Musk: Track Record | 6 weeks | 12d ago | 5w | 22 | 0.06 |
| Yann LeCun: Track Record | 6 weeks | 12d ago | 5w | 22 | 0.06 |
| Yann LeCun | 6 weeks | 12d ago | 5w | 24 | 0.06 |
| Org Watch | 6 weeks | 10d ago | 5w | 25 | 0.06 |
| Wikipedia Views | 6 weeks | 10d ago | 5w | 25 | 0.06 |
| Public Opinion Evolution Model | Quarterly | 11d ago | 11w | 42 | 0.05 |
| Eliezer Yudkowsky | 6 weeks | 12d ago | 5w | 20 | 0.05 |
| Leopold Aschenbrenner | 6 weeks | 11d ago | 5w | 22 | 0.05 |
| Donations List Website | 6 weeks | 10d ago | 5w | 22 | 0.05 |
| Scientific Research Capabilities | 3 weeks | 1d ago | 3w | 78 | 0.04 |
| Compute & Hardware | 3 weeks | 1d ago | 3w | 78 | 0.04 |
| Council on Strategic Risks | 6 weeks | 8d ago | 5w | 25 | 0.04 |
| Architecture Scenarios Table | Monthly | 3d ago | 4w | 30 | 0.03 |
| Deployment Architectures Table | Monthly | 3d ago | 4w | 30 | 0.03 |
| Evaluation Types Table | Monthly | 3d ago | 4w | 30 | 0.03 |
| Goodfire | 3 weeks | 1d ago | 3w | 72 | 0.03 |
| Safety Approaches Table | Monthly | 3d ago | 4w | 35 | 0.03 |
| Safety Generalizability Table | Monthly | 3d ago | 4w | 30 | 0.03 |
| Accident Risks Table | Monthly | 3d ago | 4w | 35 | 0.03 |
| Compute Concentration | Monthly | 1d ago | 4w | 75 | 0.03 |
| Projecting Compute Spending | Monthly | 1d ago | 4w | 72 | 0.02 |
| AI-Assisted Knowledge Management | Monthly | 1d ago | 4w | 45 | 0.02 |
| Wikipedia and AI Content | Monthly | 1d ago | 4w | 55 | 0.02 |
| AI Acceleration Tradeoff Model | Quarterly | 1d ago | 13w | 87 | 0.01 |
| Grokipedia | Monthly | 1d ago | 4w | 42 | 0.01 |
| MAIM (Mutually Assured AI Malfunction) | Bimonthly | 1d ago | 8w | 72 | 0.01 |
| Agentic AI | 3 weeks | 0d ago | 3w | 77 | 0.00 |
| OpenClaw Matplotlib Incident (2026) | Monthly | 0d ago | 4w | 55 | 0.00 |
| Sparse / MoE Transformers | 6 weeks | 0d ago | 6w | 62 | 0.00 |
| Anthropic (Funder) | 6 weeks | 0d ago | 6w | 78 | 0.00 |
| Anthropic IPO | 3 weeks | 0d ago | 3w | 75 | 0.00 |
| Anthropic Valuation Analysis | 3 weeks | 0d ago | 3w | 78 | 0.00 |
| Anthropic | Weekly | 0d ago | 3d | 62 | 0.00 |
| Longterm Wiki | Weekly | 0d ago | 7d | 12 | 0.00 |
| X.com Platform Epistemics | Monthly | 0d ago | 4w | 55 | 0.00 |
| Rogue AI Scenarios | 6 weeks | 0d ago | 6w | 78 | 0.00 |
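The Priority column appears to follow a simple staleness-times-score formula. The sketch below is a reconstruction inferred by spot-checking rows against one another, not a documented specification; the cadence day-counts are assumptions, and "6 weeks" evidently behaves as 45 days (1.5 months) rather than 42.

```python
# Inferred priority formula (reconstructed from spot checks, not documented):
#   priority = (days since last update / cadence length in days) * (score / 100)
# rounded to two decimals.

# Assumed cadence lengths in days; "6 weeks" as 45 matches the table
# better than 42, suggesting cadences are defined in fractions of months.
CADENCE_DAYS = {
    "Weekly": 7,
    "3 weeks": 21,
    "Monthly": 30,
    "6 weeks": 45,
    "Bimonthly": 60,
    "Quarterly": 90,
}

def priority(cadence: str, days_since_update: int, score: int) -> float:
    """Staleness-weighted priority: elapsed cadence periods times score/100."""
    return round(days_since_update / CADENCE_DAYS[cadence] * score / 100, 2)
```

Spot checks match the table, e.g. "Corporate AI Safety Responses" (Weekly, 16d ago, 78 → 1.78), "Compute Monitoring" (3 weeks, 47d ago, 78 → 1.75), and "AI Safety Solution Cruxes" (6 weeks, 47d ago, 82 → 0.86). The one outlier is the top row (8.03), whose "365d ago" may be a capped display value rather than the figure used in the calculation.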