
Timelines Wiki

Status: active

Timelines Wiki is a specialized MediaWiki project documenting chronological histories of AI safety and EA organizations, created by Issa Rice with funding from Vipul Naik in 2017. While useful as a historical reference source, it primarily serves as documentation infrastructure rather than providing original analytical insight.

Organizations (3)

Center for Applied Rationality: Berkeley nonprofit founded in 2012 that teaches applied rationality through workshops ($3,900 for 4.5 days); it has trained 1,300+ alumni, who report 9.2/10 satisfaction and a 0.17σ life-satisfaction increase at 1-year follow-up. Received $3.5M+ from Open Philanthropy and $5M from FTX (later clawed back); faced major organizational challenges in the early 2020s.
Machine Intelligence Research Institute (MIRI): One of the oldest organizations focused on AI existential risk, founded in 2000 as the Singularity Institute for Artificial Intelligence (SIAI).
LessWrong: A rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondents in 2014. Survey participation peaked at 3,000+ in 2016, declining to 558 by 2023, with the community increasingly focused on AI alignment discussions.

People (4)

Vipul Naik: A mathematician and EA community member who has funded ~$255K in contract research (primarily to Sebastian Sanchez and Issa Rice) and created the Donations List Website, which tracks $72.8B in philanthropic donations. His main contribution is transparency infrastructure for EA funding patterns and donation tracking.
Issa Rice: An independent researcher who has built knowledge infrastructure tools such as Timelines Wiki and AI Watch for the EA and AI safety communities. His work centers on data aggregation and reference material rather than original analytical research on AI safety.
Eliezer Yudkowsky: One of the founding figures of AI safety as a field. In 2000, he co-founded the Machine Intelligence Research Institute (MIRI), originally called the Singularity Institute for Artificial Intelligence, making it one of the first organizations dedicated to studying the risks from advanced AI. His early writings on AI risk predated academic interest in the topic by over a decade. Yudkowsky's technical contributions include foundational work on decision theory, the formalization of Friendly AI concepts, and the identification of failure modes like deceptive alignment and the "sharp left turn." His 2022 essay "AGI Ruin: A List of Lethalities" provides a comprehensive catalog of why he believes aligning superintelligent AI is extremely difficult. He has been pessimistic about humanity's chances, arguing that current approaches to alignment are inadequate and that AI development should be slowed or halted. Beyond AI safety, Yudkowsky founded the "rationalist" community through his sequences of blog posts on human rationality, later compiled as "Rationality: From AI to Zombies." This community has been a major source of AI safety researchers and has shaped how the field thinks about reasoning under uncertainty. His writing style, blending technical concepts with accessible explanations and science fiction examples, has influenced how AI risk is communicated. Despite his pessimism, he remains an active voice advocating for taking AI risk seriously at the highest levels of government and industry.
Nick Bostrom: Swedish philosopher, founder of the Future of Humanity Institute (FHI), author of "Superintelligence" and "Deep Utopia," known for existential risk frameworks and the simulation hypothesis.

Related Projects (3)

AI Watch: A tracking database by Issa Rice that monitors AI safety organizations, people, funding, and publications as part of his broader knowledge infrastructure ecosystem. The article provides useful context about Rice's systematic approach to documentation but lacks concrete details about AI Watch's actual scope, methodology, or current operational status.
Org Watch: A tracking website by Issa Rice that monitors EA and AI safety organizations, though the article lacks concrete information about its actual features, scope, or current status. The piece reads more like speculative analysis of what the tool might do than documentation of an established, operational product.
Donations List Website: Comprehensive documentation of an open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughly describes the tool's features, data coverage, and limitations, but it is purely descriptive reference material about a data tool rather than an analysis of AI safety funding patterns.

Related Wiki Pages

Top Related Pages

Concepts

Epistemic Tools
Tools Overview

Organizations

LessWrong

Analysis

AI Watch
Org Watch
Donations List Website

Other

Eliezer Yudkowsky
Nick Bostrom

Clusters

ai-safety
