AISafety.info – AI Safety Resource Hub
Web: aisafety.info
AISafety.info is a community-maintained hub, ideal for newcomers seeking structured overviews of AI safety concepts and for practitioners looking for a curated index of key ideas and resources across technical and governance domains.
Metadata
Importance: 62/100 · homepage
Summary
AISafety.info is a community hub providing accessible introductions, explainers, and curated resources on AI safety topics. It serves as an entry point for those new to the field as well as a reference for practitioners, covering technical safety, alignment concepts, and related research areas.
Key Points
- Aggregates and explains core AI safety concepts, including alignment, interpretability, and existential risk, in accessible language.
- Serves as a community-maintained knowledge base connecting newcomers and researchers to key papers, concepts, and organizations.
- Covers both technical topics (e.g., interpretability, causal scrubbing) and broader governance and coordination concerns.
- Designed to lower the barrier to entry for people interested in contributing to or understanding AI safety work.
- Regularly updated to reflect current research priorities and community discussions in the AI safety ecosystem.
Cited by 4 pages
| Page | Type | Quality |
|---|---|---|
| Conjecture | Organization | 37.0 |
| Longterm Wiki | Project | 63.0 |
| Stampy / AISafety.info | Project | 45.0 |
| Similar Projects to LongtermWiki: Research Report | -- | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 3 KB
Take AISafety.info’s 3 minute survey to help inform our strategy and priorities: [Take the survey](https://docs.google.com/forms/d/e/1FAIpQLSebvqMAyb1vUoP7gIEAJHOZE4HtNp6oixJm3taVos0AfwrBKg/viewform)

# Smarter-than-human AI may come soon.

## It could lead to human extinction.

## Wait, AI will do what? [Click here](https://aisafety.info/questions/NM3T/)

## I need more arguments [Click here](https://aisafety.info/questions/NM3Q/I-need-more-arguments)

- [AI is advancing fast](https://aisafety.info/questions/NM37/AI-is-advancing-fast)
- [AI may pursue goals](https://aisafety.info/questions/NM3J/AI-may-pursue-goals)
- [AI can win a conflict against us](https://aisafety.info/questions/NM3O/AI-can-win-a-conflict-against-us)
- [Experts are highly concerned](https://aisafety.info/questions/NM3D/Experts-are-highly-concerned)

## Want to help with AI safety? [Get involved](https://aisafety.info/how-can-i-help)

## Questions? Ask Stampy, our chatbot, any questions about AI safety

Try asking me...

- [Are AIs conscious?](https://aisafety.info/chat/?question=Are%20AIs%20conscious?)
- [What is an intelligence explosion?](https://aisafety.info/chat/?question=What%20is%20an%20intelligence%20explosion?)
- [What is the orthogonality thesis?](https://aisafety.info/chat/?question=What%20is%20the%20orthogonality%20thesis?)

Stampy can be inaccurate. Always verify its sources.

### Advanced sections

- [Beyond the basics – Dive deeper into the ideas](https://aisafety.info/questions/NM06/Beyond-the-basics)
- [Predictions about future AI – How will advanced AI unfold?](https://aisafety.info/questions/MSJJ/Predictions-about-future-AI)
- [Alignment research – Technical approaches to AI alignment](https://aisafety.info/questions/9J1L/Alignment-research)
- [AI governance – Institutions and policies to manage AI](https://aisafety.info/questions/8AF4/AI-governance)

* * *

[AISafety.info](https://aisafety.info/) is a project founded by [Rob Miles](https://www.youtube.com/c/robertmilesai). The website is maintained by a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.

Get involved: [Donate](https://www.every.org/stampy?utm_campaign=donate-link#/donate) · [Code](https://github.com/StampyAI/stampy-ui) · [Write](https://get_involved.aisafety.info/) · [Join us on Discord](https://discord.gg/88TbjZnNyA) · [About us](https://aisafety.info/questions/NLZQ/What-is-this-site-about)

Partner projects: [AISafety.com](https://www.aisafety.com/) · [Alignment Ecosystem Development](https://alignment.dev/)

[© AISafety.info, 2022—2026](https://coda.io/d/AI-Safety-Info-Dashboard_dfau7sl2hmG/Copyright_su79L#_luPMa) AISafety.info is an [Ashgro Inc](https://www.ashgro.org/) project. Ashgro Inc (EIN: 88-4232889) is a 501(c)(3) public charity incorporated in Delaware.
Resource ID: 876bb3bfc6031642 · Stable ID: ZGY2YTgyMT