AI Watch
AI Watch is a tracking database by Issa Rice that monitors AI safety organizations, people, funding, and publications as part of his broader knowledge-infrastructure ecosystem. The article provides useful context about Rice's systematic approach to documentation but lacks concrete details about AI Watch's actual scope, methodology, or current operational status.
Organizations
| Organization | Description |
|---|---|
| Center for Applied Rationality | Berkeley-based nonprofit founded in 2012 that teaches applied rationality through workshops ($3,900 for 4.5 days). It has trained 1,300+ alumni, who report 9.2/10 satisfaction and a 0.17σ increase in life satisfaction at one-year follow-up. Received $3.5M+ from Open Philanthropy and $5M from FTX (later clawed back), and faced major organizational challenges in the early 2020s. |
| Long-Term Future Fund (LTFF) | Regranting program that has distributed $20M since 2017 (approximately $10M to AI safety) with a median grant of $25K, filling the gap in the AI safety funding ecosystem between personal savings and institutional funders like Coefficient Giving (median grant $257K). In 2023, LTFF granted $6.67M with a 19.3% acceptance rate. |
| Coefficient Giving | Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations and benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering 8 research areas. |
| Machine Intelligence Research Institute (MIRI) | One of the oldest organizations focused on AI existential risk, founded in 2000 as the Singularity Institute for Artificial Intelligence (SIAI). |
| LessWrong | Rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondents in 2014. Survey participation peaked at 3,000+ in 2016 and declined to 558 by 2023, with the community increasingly focused on AI alignment discussions. |
People
| Person | Description |
|---|---|
| Issa Rice | Independent researcher who has created knowledge-infrastructure tools such as Timelines Wiki and AI Watch for the EA and AI safety communities. His contributions are primarily reference material and data aggregation rather than original analytical research on AI safety. |
| Vipul Naik | Mathematician and EA community member who has funded ~$255K in contract research (primarily to Sebastian Sanchez and Issa Rice) and created the Donations List Website, which tracks $72.8B in philanthropic donations. His main contribution is transparency infrastructure for EA funding patterns and donation tracking. |
| Rohin Shah | Research scientist at Google DeepMind working on AI alignment. He previously wrote the influential Alignment Newsletter summarizing AI safety research. His work focuses on reward learning, value alignment, and understanding the alignment problem from both technical and conceptual perspectives. |
Related Projects
| Project | Description |
|---|---|
| Org Watch | Tracking website by Issa Rice that monitors EA and AI safety organizations. The article lacks concrete information about its actual features, scope, or current status, reading more like speculative analysis of what the tool might do than documentation of an established, operational product. |
| Timelines Wiki | Specialized MediaWiki project documenting chronological histories of AI safety and EA organizations, created by Issa Rice in 2017 with funding from Vipul Naik. Useful as a historical reference, it primarily serves as documentation infrastructure rather than providing original analytical insight. |
| Donations List Website | Open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughly describes the tool's features, data coverage, and limitations, but is purely descriptive reference material rather than analysis of AI safety funding patterns. |
Related Wiki Pages
- Issa Rice
- Org Watch
- AI Alignment
- Vipul Naik
- Timelines Wiki