Longterm Wiki

Secure AI Project

secure-ai-project (E562)
Path: /knowledge-base/organizations/secure-ai-project/
Page Metadata
{
  "id": "secure-ai-project",
  "numericId": null,
  "path": "/knowledge-base/organizations/secure-ai-project/",
  "filePath": "knowledge-base/organizations/secure-ai-project.mdx",
  "title": "Secure AI Project",
  "quality": 47,
  "importance": 22,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-02-01",
  "llmSummary": "Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower protections, and risk mitigation incentives. Rated highly by evaluators with confidential achievements at major AI lab; advocates mandatory safety/security protocols rather than voluntary commitments, funded exclusively by individual donors and nonprofits (no corporate funding).",
  "structuredSummary": null,
  "description": "Policy advocacy organization co-founded by Nick Beckstead focused on legislative approaches to AI safety and security standards",
  "ratings": {
    "focus": 7.2,
    "novelty": 1.5,
    "rigor": 4.8,
    "completeness": 6.5,
    "concreteness": 4.2,
    "actionability": 2
  },
  "category": "organizations",
  "subcategory": "safety-orgs",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 1688,
    "tableCount": 2,
    "diagramCount": 0,
    "internalLinks": 14,
    "externalLinks": 16,
    "footnoteCount": 15,
    "bulletRatio": 0,
    "sectionCount": 11,
    "hasOverview": true,
    "structuralScore": 12
  },
  "suggestedQuality": 80,
  "updateFrequency": 21,
  "evergreen": true,
  "wordCount": 1688,
  "unconvertedLinks": [
    {
      "text": "California SB 1047 - Wikipedia",
      "url": "https://en.wikipedia.org/wiki/Safe_and_Secure_Innovation_for_Frontier_Artificial_Intelligence_Models_Act",
      "resourceId": "9607d725074dfe2e",
      "resourceTitle": "113+ current and former employees"
    }
  ],
  "unconvertedLinkCount": 1,
  "convertedLinkCount": 0,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 16,
    "similarPages": [
      {
        "id": "centre-for-long-term-resilience",
        "title": "Centre for Long-Term Resilience",
        "path": "/knowledge-base/organizations/centre-for-long-term-resilience/",
        "similarity": 16
      },
      {
        "id": "lionheart-ventures",
        "title": "Lionheart Ventures",
        "path": "/knowledge-base/organizations/lionheart-ventures/",
        "similarity": 16
      },
      {
        "id": "frontier-model-forum",
        "title": "Frontier Model Forum",
        "path": "/knowledge-base/organizations/frontier-model-forum/",
        "similarity": 15
      },
      {
        "id": "ai-futures-project",
        "title": "AI Futures Project",
        "path": "/knowledge-base/organizations/ai-futures-project/",
        "similarity": 14
      },
      {
        "id": "arb-research",
        "title": "Arb Research",
        "path": "/knowledge-base/organizations/arb-research/",
        "similarity": 14
      }
    ]
  }
}
Entity Data
{
  "id": "secure-ai-project",
  "type": "organization",
  "title": "Secure AI Project",
  "description": "Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower protections, and risk mitigation incentives. Rated highly by evaluators with confidential achievements at major AI lab; advocates mandatory safety/security pr",
  "tags": [],
  "relatedEntries": [],
  "sources": [],
  "lastUpdated": "2026-02",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (0)

No backlinks

Frontmatter
{
  "title": "Secure AI Project",
  "description": "Policy advocacy organization co-founded by Nick Beckstead focused on legislative approaches to AI safety and security standards",
  "importance": 22.5,
  "lastEdited": "2026-02-01",
  "update_frequency": 21,
  "sidebar": {
    "order": 55
  },
  "ratings": {
    "focus": 7.2,
    "novelty": 1.5,
    "rigor": 4.8,
    "completeness": 6.5,
    "concreteness": 4.2,
    "actionability": 2
  },
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "quality": 47,
  "llmSummary": "Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower protections, and risk mitigation incentives. Rated highly by evaluators with confidential achievements at major AI lab; advocates mandatory safety/security protocols rather than voluntary commitments, funded exclusively by individual donors and nonprofits (no corporate funding).",
  "subcategory": "safety-orgs",
  "entityType": "organization"
}
Raw MDX Source
---
title: Secure AI Project
description: Policy advocacy organization co-founded by Nick Beckstead focused on legislative approaches to AI safety and security standards
importance: 22.5
lastEdited: "2026-02-01"
update_frequency: 21
sidebar:
  order: 55
ratings:
  focus: 7.2
  novelty: 1.5
  rigor: 4.8
  completeness: 6.5
  concreteness: 4.2
  actionability: 2
clusters:
  - community
  - ai-safety
  - governance
quality: 47
llmSummary: Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower protections, and risk mitigation incentives. Rated highly by evaluators with confidential achievements at major AI lab; advocates mandatory safety/security protocols rather than voluntary commitments, funded exclusively by individual donors and nonprofits (no corporate funding).
subcategory: safety-orgs
entityType: organization
---
import {EntityLink, KeyPeople, KeyQuestions, Section} from '@components/wiki';

## Quick Assessment

| Attribute | Assessment |
|-----------|------------|
| **Type** | Policy advocacy organization |
| **Founded** | ≈2022-2023 (exact date unspecified) |
| **Leadership** | Nick Beckstead (Co-Founder & CEO) |
| **Focus** | Legislative AI safety and security requirements |
| **Funding** | Individual donors and nonprofit institutions (no corporate funding) |
| **Key Approach** | Mandatory safety protocols, whistleblower protections, legal incentives |
| **Impact** | Rated highly by evaluators; credited with confidential safety improvements at a major AI lab |

## Key Links

| Source | Link |
|--------|------|
| Official Website | [secureaiproject.org](https://secureaiproject.org) |

## Overview

The **Secure AI Project** is a San Francisco-based organization that develops and advocates for pragmatic policies to reduce risks from advanced AI systems. Co-founded and led by Nick Beckstead as CEO, the organization distinguishes itself by focusing on legislative and regulatory interventions rather than purely <EntityLink id="E369">voluntary industry commitments</EntityLink>.[^1]

The organization operates on the premise that the AI ecosystem will be stronger and more secure if large AI developers are legally required to publish safety and security protocols, if whistleblowers are protected from retaliation, and if developers have clear incentives to mitigate risk in accordance with industry best practices.[^2] Rather than relying on voluntary commitments, which some major AI developers have already made, Secure AI Project pushes for these principles to be codified in state and federal law.

Secure AI Project explicitly does not accept corporate funding or funds from foreign governments, instead relying on individual donors and nonprofit institutions aligned with its mission.[^2] According to a 2025 nonprofit review by Zvi Mowshowitz, the organization has achieved "big wins" including enhancing safety practices at a major AI lab, with details remaining confidential.[^3]

## Nick Beckstead: Background and Path to Founding

Nick Beckstead brings extensive experience in AI safety, governance, and effective altruism philanthropy to his role as Secure AI Project's co-founder and CEO. Born in 1985, he earned a bachelor's degree in mathematics and philosophy from the University of Minnesota before completing a Ph.D. in Philosophy at Rutgers University.[^1] His doctoral dissertation made important early contributions to longtermism, focusing on existential risk, population ethics, space colonization, and differential progress.[^4]

As a graduate student, Beckstead co-founded the first US chapter of Giving What We Can, pledging to donate half of his post-tax income to cost-effective organizations fighting global poverty.[^4] This early commitment to effective altruism would shape his subsequent career trajectory.

Beckstead served as a Research Fellow at Oxford University's <EntityLink id="E140">Future of Humanity Institute</EntityLink> before joining <EntityLink id="E552">Coefficient Giving</EntityLink> as a Program Officer in 2014.[^1] At Coefficient Giving, he oversaw research and grantmaking related to global catastrophic risk reduction, with particular focus on advanced AI risks. His work included grants such as \$590,000 to the University of Tübingen for robustness research, \$11.35 million to the <EntityLink id="E57">Center for Human-Compatible AI</EntityLink> for organizational support, and \$265,000 to UC Santa Cruz for adversarial robustness research.[^5]

During his time at Coefficient Giving, Beckstead co-authored sections on <EntityLink id="E439">AI alignment</EntityLink> risks and noted ongoing challenges in finding qualified people to work on "the strategic aspect of potential risks from advanced AI," indicating significant funding availability but limited talent pools.[^6] He emphasized two key risk categories: alignment problems (misaligned powerful AI) and power concentration (bad actors gaining AI advantage).[^7]

After Coefficient Giving, Beckstead served as Policy Lead at the <EntityLink id="E47">Center for AI Safety</EntityLink> and became CEO of the Future Fund (part of the FTX Foundation) in November 2021.[^1] He resigned from the Future Fund in November 2022 when FTX collapsed.[^4] Following this, he co-founded the Secure AI Project, applying his experience in AI safety grantmaking, policy, and governance to legislative advocacy work.

## Mission and Policy Approach

Secure AI Project's core mission centers on three legislative priorities:

**1. Mandatory Safety and Security Protocols (SSPs):** The organization advocates for legal requirements that large AI developers must publish and implement protocols to assess, test, and mitigate severe risks from their systems. This goes beyond voluntary commitments by making such protocols legally enforceable.[^2]

**2. Whistleblower Protections:** Recognizing that internal voices may be critical for identifying safety issues, the organization pushes for legal protections against retaliation for those who report AI safety concerns.[^2]

**3. Risk Mitigation Incentives:** Rather than relying purely on compliance, Secure AI Project advocates for creating clear legal and economic incentives that reward developers for implementing industry best practices in risk mitigation.[^2]

The organization acknowledges that current AI systems offer substantial societal benefits while also creating risks—some well understood and others still being discovered as technology advances.[^2] This balanced perspective informs their pragmatic approach to policy advocacy.

The organization's work aligns with broader <EntityLink id="E608">AI governance</EntityLink> discussions about model weight security, access controls, and screening procedures to prevent bad actors from exploiting powerful AI systems.[^8] Their 2024 reports and scenario planning work have been highlighted as examples of high-quality strategic thinking in this space.[^3]

## Recent Developments and Impact

According to a 2025 nonprofit assessment, Secure AI Project was rated as deserving "high" funding priority, with evaluators expressing high confidence in the organization's continued leverage and impact.[^3] The assessment verified "big wins" including private improvements to safety practices at a major AI lab, though specific details remain confidential to protect ongoing relationships.

The organization's scenario planning work has been noted as particularly strong, with 2024 reports cited as evidence of quality strategic analysis.[^3] Evaluators praised the detail-oriented approach and results achieved by the team.

Secure AI Project's advocacy runs parallel to recent legislative developments. California approved an AI safety law in 2025, effective January 1, 2026, that requires AI developers to implement safeguards; this aligns closely with the organization's push for mandatory safety and security protocols, though available sources do not confirm a direct connection between its advocacy and the law's passage.[^9]

The organization continues to pursue state and federal legislative advocacy while maintaining its independence through selective funding sources. Interested parties can contact the organization at info@secureaiproject.org for potential partnerships.[^2]

## Relationship to Broader AI Safety Ecosystem

Beckstead's career trajectory places Secure AI Project at the intersection of several key AI safety institutions. His prior roles at <EntityLink id="E552">Coefficient Giving</EntityLink>, the <EntityLink id="E47">Center for AI Safety</EntityLink>, and the <EntityLink id="E140">Future of Humanity Institute</EntityLink> connect the organization to major nodes in the AI safety research and funding landscape.[^1]

At Coefficient Giving, Beckstead co-authored views on <EntityLink id="E439">AI alignment</EntityLink> risks and loss of control scenarios, citing work by <EntityLink id="E215">Nick Bostrom</EntityLink> on potential large-scale harms from advanced AI.[^10] He advocated for "AI for AI safety" approaches to strengthen safety progress, risk evaluation, and capability restraint.[^7]

In public discussions, Beckstead has acknowledged challenges in mainstream AI researcher engagement with technical safety work, noting "Kuhnian barriers" in the machine learning field that favor empirical results over philosophical safety motivations.[^11] He described how work that doesn't resemble traditional ML papers can trigger "pseudoscience alarms" that hinder progress.[^11]

Within the effective altruism community, Beckstead has been featured positively on platforms like the EA Forum and in <EntityLink id="E510">80,000 Hours</EntityLink> podcasts discussing high-impact career paths in AI safety.[^4] He has recommended paths such as deep learning residencies (like the Google Brain Residency) over PhDs for quick industry entry into AI safety work.[^12]

## Criticisms and Concerns

While no direct criticisms of Secure AI Project itself appear in available sources, Beckstead's career has intersected with some controversial areas within effective altruism. In 2017, while serving as a trustee of the <EntityLink id="E517">Centre for Effective Altruism</EntityLink> (CEA), he approved Coefficient Giving grants to CEA, creating an unacknowledged conflict of interest; he later stepped down but rejoined in 2021.[^13]

More significantly, Beckstead's role as CEO of the FTX Future Fund ended with his resignation in November 2022 when FTX collapsed.[^4] While the collapse stemmed from FTX's broader financial misconduct rather than from the Future Fund's own operations, the association nonetheless marked a significant disruption in his career.

Some critics of effective altruism have questioned the movement's shift toward longtermism and existential risk focus, including AI risk, as a core priority.[^14] Beckstead's defense of AI and biosecurity funding as cost-effective drew scrutiny in 2018 discussions about whether such focus might harm EA's broader public image.[^11]

Additionally, during his time managing the Long-Term Future Fund, Beckstead made one small AI safety grant while leaving 96% of available funds unallocated, a pattern noted critically in AI safety funding reviews.[^15]

## Funding and Operations

Secure AI Project is funded exclusively by individual donors and nonprofit institutions aligned with its mission.[^2] The organization explicitly states it does not accept corporate funding or funds from foreign governments—a policy designed to maintain independence in its advocacy work.

Evaluators have recommended the organization for substantial donations, with large grants directed through tyler@lasst.org and smaller donations available through designated links.[^3] No specific funding amounts have been publicly disclosed.

This selective funding approach contrasts with Beckstead's prior work at Coefficient Giving, where he oversaw grants totaling millions of dollars, including a £13.4 million (≈\$16.2 million USD) portfolio for AI risks, biosecurity, and macrostrategy, of which roughly \$12 million had been allocated by September 2019.[^5]

## Key Uncertainties

Several important aspects of Secure AI Project remain uncertain or undisclosed:

**Founding details:** The exact founding date and full list of co-founders beyond Nick Beckstead are not specified in available sources.[^1]

**Team composition:** Beyond Beckstead as CEO, the organization's staff size, key personnel, and board members are not publicly detailed.[^2]

**Specific achievements:** While evaluators cite "big wins" including safety improvements at a major AI lab, these details remain confidential.[^3]

**Legislative success:** The extent to which the organization's advocacy directly influenced specific legislation, such as California's 2025 AI safety law, has not been explicitly confirmed.[^9]

**Future strategy:** How the organization will scale its efforts and whether it will expand beyond California and federal advocacy remains unclear.

**Measurement:** The organization has not published public impact assessments or metrics for evaluating policy influence.

## Sources

[^1]: [Nick Beckstead personal website](https://nickbeckstead.com/)
[^2]: [Secure AI Project website - About](https://secureaiproject.org/)
[^3]: [Nick Beckstead - EA Forum Topic](https://forum.effectivealtruism.org/topics/nick-beckstead)
[^4]: [Nick Beckstead EA Forum Profile](https://forum.effectivealtruism.org/users/nick_beckstead)
[^5]: [Redwood Research Grant - Coefficient Giving](https://www.openphilanthropy.org/grants/redwood-research-general-support-2023/)
[^6]: [Coefficient Giving Progress 2017](https://www.openphilanthropy.org/research/our-progress-in-2017-and-plans-for-2018/)
[^7]: [Coefficient Giving AI Alignment RFP](https://www.openphilanthropy.org/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems/)
[^8]: [Coefficient Giving Technical AI Safety RFP - EA Forum](https://forum.effectivealtruism.org/posts/XtgDaunRKtCPzyCWg/open-philanthropy-technical-ai-safety-rfp-usd40m-available)
[^9]: [California SB 1047 - Wikipedia](https://en.wikipedia.org/wiki/Safe_and_Secure_Innovation_for_Frontier_Artificial_Intelligence_Models_Act)
[^10]: [Research directions Open Phil wants to fund - Alignment Forum](https://www.alignmentforum.org/posts/26SHhxK2yYQbh7ors/research-directions-open-phil-wants-to-fund-in-technical-ai)
[^11]: [Nick Beckstead Fireside Chat 2018 - EA Global](https://www.effectivealtruism.org/articles/ea-global-2018-beckstead-fireside-chat)
[^12]: [Nick Beckstead podcast - 80,000 Hours](https://80000hours.org/podcast/episodes/nick-beckstead-giving-billions/)
[^13]: [Nick Beckstead leaving EV boards - EA Forum](https://forum.effectivealtruism.org/posts/Defu3jkejb7pmLjeN/nick-beckstead-is-leaving-the-effective-ventures-boards)
[^14]: [Nick Beckstead on EA community building - EA Forum](https://forum.effectivealtruism.org/posts/xwDG64qAjGjQeXwfd/nick-beckstead-ea-community-building)
[^15]: [Kerry Vaughan, Nick Beckstead, and Lewis Bollard: EA Funds - EA Forum](https://forum.effectivealtruism.org/posts/mCSmr6LNNQD4AKj4z/kerry-vaughan-nick-beckstead-and-lewis-bollard-ea-funds)