Longterm Wiki

Resources

External resources (papers, articles, reports) cited across the wiki. Resources are indexed from citations; each entry tracks credibility, enrichment status, and linking pages.
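The fields each entry tracks could be sketched as a small record type. This is an illustrative assumption about the shape of the data, not the wiki's actual schema; all field names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a resource entry as described above.
# Field names are illustrative assumptions, not the wiki's real data model.
@dataclass
class Resource:
    title: str
    kind: str          # e.g. "web", "paper", "government", "blog"
    source: str        # publishing organization
    credibility: int   # rating out of 5
    cited_by: int      # number of wiki pages citing this resource
    enriched: bool     # whether the enrichment pipeline has processed it

# Example mirroring the first row of the table below.
entry = Resource(
    title="NIST AI Risk Management Framework",
    kind="government",
    source="NIST",
    credibility=5,
    cited_by=40,
    enriched=True,
)
```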

Resources: 21,638
Webs: 20,410
Papers: 465
Governments: 336
Blogs: 241
Peer-Reviewed Venues: 11
With Summaries: 21,580
Cited by Pages: 4,356
Enriched: 21,636

Enrichment Pipeline

21,636 / 21,638 resources enriched
Classified: 2 (0%) · Enriched: 21,636 (100%)

Top Domains

1,662 unique domains
www.lesswrong.com: 11,705
forum.effectivealtruism.org: 4,879
arxiv.org: 344
en.wikipedia.org: 155
www.anthropic.com: 112
www.wikidata.org: 101
openai.com: 95
www.rand.org: 59
www.nature.com: 53
www.openphilanthropy.org: 50
www.alignmentforum.org: 49
www.nist.gov: 40
fortune.com: 39
techcrunch.com: 38
www.cnbc.com: 37
github.com: 37
intelligence.org: 32
www.gov.uk: 32
www.brookings.edu: 32
www.macfound.org: 29
21,638 resources
Title · Type · Source · Credibility · Cited by
NIST AI Risk Management Framework · government · NIST · 5/5 · 40
Anthropic - AI Safety Company Homepage · web · Anthropic · 4/5 · 38
Anthropic's Work on AI Safety · paper · Anthropic · 4/5 · 36
METR: Model Evaluation and Threat Research · web · METR · 4/5 · 33
Future of Humanity Institute · web · Future of Humanity Institute · 4/5 · 29
Partnership on AI (PAI) – Multi-Stakeholder AI Governance Organization · web · Partnership on AI · 3/5 · 27
Center for AI Safety (CAIS) – Homepage · web · Center for AI Safety · 4/5 · 27
UK AI Safety Institute (AISI) · government · UK AI Safety Institute · 4/5 · 27
FLI AI Safety Index Summer 2025 · web · Future of Life Institute · 3/5 · 26
OpenAI Official Homepage · web · OpenAI · 4/5 · 24
CSET: AI Market Dynamics · web · CSET Georgetown · 4/5 · 21
EU AI Act – Official Resource Hub · web · - · - · 20
RAND Provides Objective Research Services and Public Policy Analysis · web · RAND Corporation · 4/5 · 19
OpenAI Preparedness Framework · web · OpenAI · 4/5 · 19
RAND: AI and National Security · web · RAND Corporation · 4/5 · 19
AISI Frontier AI Trends · government · UK AI Safety Institute · 4/5 · 18
International AI Safety Report 2025 · web · - · - · 18
Stanford HAI: AI Companions and Mental Health · web · Stanford HAI · 4/5 · 18
Risks from Learned Optimization · paper · arXiv · 3/5 · 17
GovAI | Home · government · Centre for the Governance of AI · 4/5 · 17
Redwood Research: AI Control · web · - · - · 16
Machine Intelligence Research Institute · web · MIRI · 3/5 · 16
Epoch AI - AI Research and Forecasting Organization · web · Epoch AI · 4/5 · 15
Apollo Research - AI Safety Evaluation Organization · web · Apollo Research · 4/5 · 15
OpenAI: Model Behavior · paper · OpenAI · 4/5 · 15
Google DeepMind Official Homepage · web · Google DeepMind · 4/5 · 14
Center for a New American Security (CNAS) - Homepage · web · CNAS · 4/5 · 14
Anthropic's 2024 alignment faking study · web · Anthropic · 4/5 · 14
C2PA Explainer Videos · web · - · - · 14
Alignment Research Center · web · - · - · 13
OpenAI Safety Updates · web · OpenAI · 4/5 · 13
Frontier Models are Capable of In-Context Scheming · web · Apollo Research · 4/5 · 13
Metaculus Forecasting Platform · web · Metaculus · 3/5 · 13
Responsible Scaling Policy · web · Anthropic · 4/5 · 12
Anthropic's follow-up research on defection probes · web · Anthropic · 4/5 · 12
AI Safety Institute - GOV.UK · government · UK Government · 4/5 · 12
Center for Human-Compatible AI · web · - · - · 12
European approach to artificial intelligence · web · European Union · 4/5 · 11
More capable models scheme at higher rates · web · Apollo Research · 4/5 · 11
AI Safety Index Winter 2025 · web · Future of Life Institute · 3/5 · 11
Stanford AI Index 2025 · web · Stanford HAI · 4/5 · 11
Constitutional AI: Harmlessness from AI Feedback · paper · Anthropic · 4/5 · 11
AI Alignment Forum · blog · Alignment Forum · 3/5 · 10
AI experts show significant disagreement · web · AI Impacts · 3/5 · 10
Biden Administration AI Executive Order 14110 · government · White House · 4/5 · 10
Future of Life Institute · web · Future of Life Institute · 3/5 · 10
Anthropic: "Discovering Sycophancy in Language Models" · paper · arXiv · 3/5 · 10
Anthropic: Recommended Directions for AI Safety Research · web · Anthropic Alignment · 4/5 · 10
METR's analysis of 12 companies · web · METR · 4/5 · 10
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training · paper · arXiv · 3/5 · 10
Page 1 of 433