Secure AI Project
Safety Organization
Policy advocacy organization founded around 2022–2023 by Nick Beckstead, focused on legislative requirements for AI safety protocols, whistleblower protections, and risk-mitigation incentives. Rated highly by evaluators, with confidential achievements at a major AI lab; advocates mandatory safety and security protocols for frontier AI developers.
Related Wiki Pages
AI Alignment
Technical approaches to ensuring AI systems pursue intended goals and remain aligned with human values throughout training and deployment. Current ...
Voluntary AI Safety Commitments
Comprehensive analysis of AI labs' voluntary safety pledges, examining the effectiveness of industry self-regulation through White House commitment...
Nick Beckstead
American philosopher and longtermism researcher, known for his PhD dissertation on shaping the far future, former Program Officer at Open Philanthr...
Centre for Effective Altruism
Oxford-based organization that coordinates the effective altruism movement, running EA Global conferences, supporting local groups, and maintaining...
California SB 53
California's Transparency in Frontier Artificial Intelligence Act, the first U.S. state law regulating frontier AI models through transparency requ...