MIT AI Risk Repository
The MIT AI Risk Repository catalogs 1,700+ AI risks extracted from 65+ frameworks into a searchable database with dual taxonomies (causal and domain-based). Updated quarterly since August 2024, it is the first comprehensive public catalog of AI risks, though it is limited by its framework-extraction methodology and incomplete coverage of some risk domains.
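The dual-taxonomy structure lends itself to simple programmatic filtering. A minimal sketch, assuming a local CSV export of the repository; the file name and column names ("Domain", "Entity") are hypothetical, not the repository's actual schema:

```python
# Hypothetical sketch: tallying risks in a local CSV export of the
# repository by a taxonomy column. Column names are assumptions.
import csv
from collections import Counter

def count_by(path: str, column: str) -> Counter:
    """Count rows per value of a taxonomy column, skipping blanks."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row[column] for row in csv.DictReader(f) if row.get(column))

# Usage (assumed file and column names):
# count_by("ai_risk_repository.csv", "Domain")
```

The same helper works for either taxonomy: pass the causal-taxonomy column to group by cause, or the domain-taxonomy column to group by risk domain.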
Organizations
- Massachusetts Institute of Technology: Private research university in Cambridge, Massachusetts, and a leading center for AI and machine learning research.
Related Projects
- Stampy / AISafety.info: AISafety.info is a volunteer-maintained wiki with 280+ answers on AI existential risk, complemented by Stampy, an LLM chatbot that searches 10K-100K alignment documents via retrieval-augmented generation (RAG). Features include a Discord bot bridging YouTube comments, PageRank-style karma voting for answer quality control, and distillation of AI safety arguments into accessible formats.
Related Wiki Pages
Goal Misgeneralization
Goal misgeneralization occurs when AI systems learn capabilities that transfer to new situations but pursue wrong objectives in deployment.
AI Disinformation
AI enables disinformation campaigns at unprecedented scale and sophistication, transforming propaganda operations through automated content generation.
EU AI Act
The world's first comprehensive AI regulation, adopting a risk-based approach to regulate foundation models and general-purpose AI systems
Autonomous Weapons
Lethal autonomous weapons systems (LAWS) represent one of the most immediate and concerning applications of AI in military contexts.
Automation Bias (AI Systems)
The tendency to over-trust AI systems and accept their outputs without appropriate scrutiny.