Cooperative AI Foundation
Safety Organization
Research foundation with $15M in funding (from Macroscopic Ventures) studying how AI systems can learn to cooperate and how to design environments that incentivize cooperation. Co-founded by Allan Dafoe (now Director of Frontier Safety and Governance at Google DeepMind), Gillian Hadfield, and others. Published the foundational "Open Problems in Cooperative AI" paper (2020). Key focus areas include multi-agent cooperation, preventing destructive AI competition, and human-AI coordination.
Related Wiki Pages
Cooperative AI
Cooperative AI research investigates how AI systems can cooperate effectively with humans and other AI systems.
Allan Dafoe
Autonomous Cooperative Agents
AI agents that act cooperatively on behalf of a principal — delegation of cooperation, multi-agent cooperation dynamics, and alignment implications
Multipolar Trap (AI Development)
Competitive dynamics where rational individual actions by AI developers create collectively catastrophic outcomes.
AI Governance Coordination Technologies
International Network of AI Safety Institutes (10+ nations, $500M+ investment) achieves 85% chip tracking coverage while cryptographic verificatio...