
Stampy / AISafety.info

Status: active

AISafety.info is a volunteer-maintained wiki with 280+ answers to questions about AI existential risk, complemented by Stampy, an LLM chatbot that uses retrieval-augmented generation (RAG) to search a corpus of 10K-100K alignment documents. Features include a Discord bot bridging YouTube comments, PageRank-style karma voting for answer quality control, and distillation of AI safety arguments into accessible formats.
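
To illustrate the retrieval step in such a RAG pipeline: embed the user's question, rank the document corpus by cosine similarity, and hand the top hits to the LLM as context. The sketch below is a minimal stand-in, not Stampy's actual stack; the toy embed() function and sample corpus are assumptions made for the example.

```python
# Illustrative RAG retrieval step. embed() is a toy stand-in for a real
# embedding model; Stampy's actual pipeline is not documented here.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hashed bag-of-words embedding, normalized to unit length, so the
    # dot product below equals cosine similarity. A real system would
    # call an embedding model instead.
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Return the k documents most similar to the query embedding."""
    q = embed(query)
    return sorted(docs, key=lambda d: -float(embed(d) @ q))[:k]

if __name__ == "__main__":
    corpus = [
        "Corrigibility means an AI accepts correction and shutdown.",
        "RLHF fine-tunes models on human preference comparisons.",
        "Inner alignment concerns mesa-optimizers with unintended goals.",
    ]
    for doc in top_k("what is corrigibility", corpus, k=2):
        print(doc)
```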
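
The page also mentions "PageRank-style karma voting." One plausible reading, sketched below under stated assumptions, is that a vote counts for more when the voter has itself accumulated karma from well-regarded users, computed as a fixed point by power iteration over the vote graph. All identifiers here are hypothetical; the site's real scoring rules are not documented in this summary.

```python
# Hypothetical sketch of PageRank-style karma voting: a user's karma is
# high when high-karma users vote for their answers. Illustrative only.
from collections import defaultdict

def karma_scores(votes, damping=0.85, iterations=50):
    """votes: list of (voter, author) pairs, one per upvote on an answer.

    Returns a karma score per user via power iteration over the vote
    graph -- the same fixed point PageRank computes over hyperlinks.
    """
    users = {u for pair in votes for u in pair}
    out_links = defaultdict(list)
    for voter, author in votes:
        out_links[voter].append(author)

    n = len(users)
    karma = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        nxt = {u: (1.0 - damping) / n for u in users}
        for voter, targets in out_links.items():
            # Each voter splits its weighted influence across its votes.
            share = damping * karma[voter] / len(targets)
            for author in targets:
                nxt[author] += share
        # Users who cast no votes redistribute their weight uniformly.
        dangling = damping * sum(karma[u] for u in users if u not in out_links)
        for u in users:
            nxt[u] += dangling / n
        karma = nxt
    return karma

if __name__ == "__main__":
    votes = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("carol", "bob")]
    for user, score in sorted(karma_scores(votes).items(), key=lambda kv: -kv[1]):
        print(f"{user}: {score:.3f}")
```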

Organizations (6)

Anthropic: An AI safety company founded in January 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. It was created following disagreements with OpenAI's direction, particularly concerns about the pace of commercialization and the shift toward partnership with Microsoft.
OpenAI: The AI research company that brought large language models into mainstream consciousness through ChatGPT. Founded in December 2015 as a non-profit with the mission to ensure artificial general intelligence benefits all of humanity, OpenAI has undergone a dramatic evolution: from non-profit to "capped-profit," from research lab to produc...
Machine Intelligence Research Institute (MIRI): One of the oldest organizations focused on AI existential risk, founded in 2000 as the Singularity Institute for Artificial Intelligence (SIAI).
Manifund: A charitable regranting platform (founded 2022) that moves $2M+ annually, providing fast grants (under one week) to AI safety projects through expert regrantors with $50K-400K budgets, fiscal sponsorship, and experimental impact certificates. It distributed $2.06M in 2023 (~40% to AI safety research), with a growing focus on AI safety evaluations and field-building.
LessWrong: A rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondents in 2014. Survey participation peaked at 3,000+ in 2016 and declined to 558 by 2023, with the community increasingly focused on AI alignment discussions.
Google DeepMind: Formed in April 2023 from the merger of DeepMind and Google Brain, uniting Google's two major AI research organizations. The combined entity is one of the world's most formidable AI research labs, with landmark achievements including AlphaGo (defeating world champions at Go), AlphaFold (solving protein folding), and G...

People (2)

Gwern Branwen: A comprehensive biographical profile of the pseudonymous researcher Gwern Branwen, documenting his early advocacy of AI scaling laws (predicting AGI by 2030), his extensive self-experimentation work, and his influence within rationalist/EA communities. While well-sourced with 47 citations, the page functions as reference material rather than advancing novel arguments about AI safety.
Eliezer Yudkowsky: One of the founding figures of AI safety as a field. In 2000, he co-founded the Machine Intelligence Research Institute (MIRI), originally called the Singularity Institute for Artificial Intelligence, making it one of the first organizations dedicated to studying the risks from advanced AI. His early writings on AI risk predated academic interest in the topic by over a decade. Yudkowsky's technical contributions include foundational work on decision theory, the formalization of Friendly AI concepts, and the identification of failure modes like deceptive alignment and the "sharp left turn." His 2022 essay "AGI Ruin: A List of Lethalities" catalogs why he believes aligning superintelligent AI is extremely difficult. He has been pessimistic about humanity's chances, arguing that current approaches to alignment are inadequate and that AI development should be slowed or halted. Beyond AI safety, Yudkowsky founded the "rationalist" community through his sequences of blog posts on human rationality, later compiled as "Rationality: From AI to Zombies." This community has been a major source of AI safety researchers and has shaped how the field thinks about reasoning under uncertainty. His writing style, blending technical concepts with accessible explanations and science fiction examples, has influenced how AI risk is communicated. Despite his pessimism, he remains an active voice advocating for taking AI risk seriously at the highest levels of government and industry.

Related Projects (2)

Grokipedia: xAI's AI-generated encyclopedia, launched in October 2025 and grown to 6M+ articles, with documented quality concerns including political bias and scientific inaccuracies.
MIT AI Risk Repository: Catalogs 1,700+ AI risks from 65+ frameworks in a searchable database with dual taxonomies (causal and domain-based). Updated quarterly since August 2024, it is the first comprehensive public catalog of AI risks, though limited by its framework-extraction methodology and incomplete coverage of some risk domains.

Related Wiki Pages

Organizations

Machine Intelligence Research Institute · Manifund · LessWrong · Google DeepMind

Concepts

Epistemic Tools · Tools Overview · Similar Projects

Analysis

Grokipedia · MIT AI Risk Repository

Other

Eliezer Yudkowsky · GPT-4 · GPT-4o · Gemini 2.5 Pro · Claude · Claude 3.7 Sonnet

Clusters

epistemics · community · ai-safety
