SecureBio
securebio (E563)← Back to pagePath: /knowledge-base/organizations/securebio/
Page Metadata
{
"id": "securebio",
"numericId": null,
"path": "/knowledge-base/organizations/securebio/",
"filePath": "knowledge-base/organizations/securebio.mdx",
"title": "SecureBio",
"quality": 65,
"importance": 48,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-31",
"llmSummary": null,
"structuredSummary": null,
"description": "A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threats, through wastewater surveillance (Nucleic Acid Observatory) and AI capability evaluations (Virology Capabilities Test). Co-founded by Kevin Esvelt, who also co-founded the legally separate SecureDNA synthesis screening initiative.",
"ratings": {
"novelty": 7,
"rigor": 6,
"actionability": 7,
"completeness": 5
},
"category": "organizations",
"subcategory": "biosecurity-orgs",
"clusters": [
"biorisks",
"community",
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 1431,
"tableCount": 3,
"diagramCount": 0,
"internalLinks": 12,
"externalLinks": 14,
"footnoteCount": 14,
"bulletRatio": 0.1,
"sectionCount": 27,
"hasOverview": true,
"structuralScore": 13
},
"suggestedQuality": 87,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 1431,
"unconvertedLinks": [
{
"text": "SecureBio Official Website",
"url": "https://securebio.org/",
"resourceId": "81e8568b008e4245",
"resourceTitle": "SecureBio organization"
}
],
"unconvertedLinkCount": 1,
"convertedLinkCount": 0,
"backlinkCount": 1,
"redundancy": {
"maxSimilarity": 18,
"similarPages": [
{
"id": "ea-biosecurity-scope",
"title": "Is EA Biosecurity Work Limited to Restricting LLM Biological Use?",
"path": "/knowledge-base/responses/ea-biosecurity-scope/",
"similarity": 18
},
{
"id": "blueprint-biosecurity",
"title": "Blueprint Biosecurity",
"path": "/knowledge-base/organizations/blueprint-biosecurity/",
"similarity": 15
},
{
"id": "centre-for-long-term-resilience",
"title": "Centre for Long-Term Resilience",
"path": "/knowledge-base/organizations/centre-for-long-term-resilience/",
"similarity": 14
},
{
"id": "1day-sooner",
"title": "1Day Sooner",
"path": "/knowledge-base/organizations/1day-sooner/",
"similarity": 13
},
{
"id": "johns-hopkins-center-for-health-security",
"title": "Johns Hopkins Center for Health Security",
"path": "/knowledge-base/organizations/johns-hopkins-center-for-health-security/",
"similarity": 13
}
]
}
}Entity Data
{
"id": "securebio",
"type": "organization",
"title": "SecureBio",
"description": "A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threats, through wastewater surveillance (Nucleic Acid Observatory) and AI capability evaluations (Virology Capabilities Test). Co-founded by Kevin Esvelt, wh",
"tags": [],
"relatedEntries": [],
"sources": [],
"lastUpdated": "2026-02",
"customFields": []
}Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (1)
| id | title | type | relationship |
|---|---|---|---|
| ea-biosecurity-scope | Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | analysis | — |
Frontmatter
{
"title": "SecureBio",
"description": "A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threats, through wastewater surveillance (Nucleic Acid Observatory) and AI capability evaluations (Virology Capabilities Test). Co-founded by Kevin Esvelt, who also co-founded the legally separate SecureDNA synthesis screening initiative.",
"sidebar": {
"order": 1
},
"quality": 65,
"lastEdited": "2026-01-31",
"importance": 48,
"update_frequency": 45,
"ratings": {
"novelty": 7,
"rigor": 6,
"actionability": 7,
"completeness": 5
},
"clusters": [
"biorisks",
"community",
"ai-safety",
"governance"
],
"subcategory": "biosecurity-orgs",
"entityType": "organization"
}Raw MDX Source
---
title: SecureBio
description: A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threats, through wastewater surveillance (Nucleic Acid Observatory) and AI capability evaluations (Virology Capabilities Test). Co-founded by Kevin Esvelt, who also co-founded the legally separate SecureDNA synthesis screening initiative.
sidebar:
order: 1
quality: 65
lastEdited: "2026-01-31"
importance: 48
update_frequency: 45
ratings:
novelty: 7
rigor: 6
actionability: 7
completeness: 5
clusters:
- biorisks
- community
- ai-safety
- governance
subcategory: biosecurity-orgs
entityType: organization
---
import {DataInfoBox, EntityLink, DataExternalLinks, KeyPeople, KeyQuestions, Section} from '@components/wiki';
{/* Note: SecureBio entity not yet in data layer */}
## Quick Assessment
| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Focus Area** | AI-bio risks + pathogen surveillance | Two divisions: AI team (capability evals) and NAO (wastewater monitoring)[^1] |
| **Funding** | Well-funded by EA sources | ≈\$9.4M from <EntityLink id="E552">Coefficient Giving</EntityLink> across multiple grants[^2][^3][^4] |
| **Policy Influence** | Growing | Virology Capabilities Test adopted by major AI labs; NIST AI Safety Consortium member[^5] |
| **Team** | ≈25 staff | Experienced leadership from MIT, biosecurity, and operations backgrounds[^6] |
| **Key Concern** | Information hazard balance | Biosecurity research inherently involves dual-use knowledge[^7] |
## Overview
SecureBio is a 501(c)(3) nonprofit founded in 2022 by Kevin Esvelt, an MIT professor who invented CRISPR-based gene drive technology.[^1] The organization works to protect against catastrophic pandemics—both natural and engineered—through a three-part framework: **Delay** dangerous capability <EntityLink id="E232">proliferation</EntityLink>, **Detect** novel pathogens early, and **Defend** through physical and institutional interventions.[^8]
The organization operates two main divisions. The **AI & Biotechnology Risks** team, led by Seth Donoughe, evaluates how AI systems might accelerate biological threats and develops benchmarks for AI labs. The **Nucleic Acid Observatory (NAO)**, led by Jeff Kaufman, pioneers wastewater-based metagenomic surveillance to detect novel pathogens before they spread widely.[^6]
SecureBio occupies a distinctive position in the biosecurity landscape by explicitly focusing on *catastrophic* and *existential* biological risks rather than routine public health threats. The organization also bridges the AI safety and biosecurity communities, recognizing that advances in AI could significantly lower barriers to creating engineered pathogens.[^5] See the <EntityLink id="responses/biosecurity">Biosecurity Interventions</EntityLink> overview for how SecureBio fits within the broader intervention portfolio.
## History
### Founding and Early Development (2022-2024)
SecureBio was founded by Kevin Esvelt, who brought credibility from his work on gene drive technology and his security-minded approach to biotechnology. Esvelt had previously argued that the "security mindset was not sufficiently present in most of the bioengineering industry."[^9]
The organization quickly attracted funding from <EntityLink id="E552">Coefficient Giving</EntityLink>, receiving multiple grants:
- **\$4,000,000** for general support over three years[^2]
- **\$3,430,000** for the Nucleic Acid Observatory program[^3]
- **\$1,420,937** for biosecurity research over three years[^4]
- **\$570,000** for pathogen early warning systems[^10]
### Leadership Transition (2024-Present)
In September 2024, Kevin Esvelt stepped back from SecureBio's Board to focus on his MIT professorship and academic research, though he remains involved as a co-founder.[^11] Dr. Benjamin Mueller, formerly COO, became Executive Director and Chairman of the Board.
The board was restructured with new members bringing diverse expertise:
- **Christine Parthemore** - CEO of Council on Strategic Risks, national security expert on WMD/CBRN threats
- **Michael Specter** - Staff writer at *The New Yorker*, science and public health journalist
- **Liv Boeree** - Science communicator and philanthropist[^11]
## Core Framework: Delay, Detect, Defend
### Delay
The Delay pillar focuses on slowing the spread of dangerous biological capabilities to potential bad actors. Key initiatives include:
**<EntityLink id="E564">SecureDNA</EntityLink>**: A legally separate Swiss nonprofit co-founded by Esvelt, providing free DNA synthesis screening software to prevent dangerous pathogen creation. The system can detect hazardous genetic sequences down to 30 base pairs.[^11]
**AI Capability Restrictions**: Working with AI labs to implement appropriate access controls for models that could assist with biological weapon development.[^5]
### Detect
The Detect pillar centers on early warning systems for novel biological threats:
**Nucleic Acid Observatory (NAO)**: Uses untargeted metagenomic sequencing of wastewater to identify new and unknown pathogens—including those with extended incubation periods that evade symptom-based surveillance. As of November 2025, NAO monitors 31 sampling sites across 19 US cities and 5 US military facilities in the Indo-Pacific region, sequencing approximately 60 billion read pairs weekly.[^11][^14]
The approach is deliberately *pathogen-agnostic*, meaning it can detect threats regardless of whether they are natural, accidentally released, or deliberately engineered.[^1]
### Defend
The Defend pillar develops physical and institutional measures to prevent infection and maintain <EntityLink id="E284">societal resilience</EntityLink>:
**Far-UVC Research**: Previously investigated germicidal ultraviolet light as a passive defense against airborne pathogens (currently inactive).[^8]
**Pandemic-Proof PPE**: Advocated for improved personal protective equipment standards.[^8]
## AI & Biotechnology Risks Program
SecureBio's AI team has become increasingly prominent as concerns about AI-enabled bioterrorism have grown.
### Virology Capabilities Test
The team developed the **Virology Capabilities Test (VCT)**, a benchmark measuring AI models' ability to assist with complex virology tasks. A 2025 study using the VCT found that <EntityLink id="E218">OpenAI</EntityLink>'s o3 model "outperformed 94 percent of expert virologists" on troubleshooting complex lab protocols, demonstrating an "urgent need for thoughtful access controls."[^12]
The VCT is now used by major AI laboratories for pre-release safety evaluations.[^11]
### NIST Engagement
SecureBio participates in the NIST US AI Safety Consortium and submitted formal recommendations including:[^5]
- Ensuring the AI Risk Management Framework addresses biosecurity risks from foundation models
- Developing evaluations for CBRN risks including static benchmarks, model-graded evaluations, and task-based evaluations
- Assessing biological design tools (BDTs) that could lower barriers for non-experts
### Collaboration with AI Labs
SecureBio collaborates with frontier AI companies including <EntityLink id="E22">Anthropic</EntityLink> to build evaluation tools and mitigation strategies. Notably, former SecureBio Research Scientist Anjali Gopal, who co-led the AI project, moved to Anthropic's technical staff.[^6]
## Team and Organization
### Current Leadership
<Section title="Leadership Team">
<KeyPeople people={[
{ name: "Ben Mueller", role: "Executive Director & Board Chair", background: "Former COO; fintech strategy/operations background" },
{ name: "Seth Donoughe", role: "Director of AI", background: "Leads AI & Biotechnology Risks division" },
{ name: "Jeff Kaufman", role: "Director of NAO", background: "Leads Nucleic Acid Observatory; pathogen early warning" },
{ name: "Alvaro Morales", role: "Director of Operations", background: "Administrative and operational leadership" }
]} />
</Section>
### Team Structure
| Division | Head | Team Size | Focus |
|----------|------|-----------|-------|
| **AI & Biotechnology Risks** | Seth Donoughe | ≈12 | AI capability evaluations, policy recommendations |
| **Nucleic Acid Observatory** | Jeff Kaufman | ≈15 | Wastewater surveillance, metagenomic sequencing |
| **Operations** | Alvaro Morales | ≈3 | Administration, finance, HR |
Key researchers include Jasper Götting (Head of Research, AI team), Will Bradshaw (Head of Computational Programs, NAO), and James Kremer (Head of Laboratory Science, NAO).[^6]
### Notable Departures
- **Kevin Esvelt** - Stepped back September 2024 to focus on MIT professorship[^11]
- **Anjali Gopal** - Former AI Project Co-Lead, now at <EntityLink id="E22">Anthropic</EntityLink>[^6]
## Funding
### Major Grants
| Funder | Amount | Purpose | Year |
|--------|--------|---------|------|
| <EntityLink id="E552">Coefficient Giving</EntityLink> | \$4,000,000 | General support (3 years) | 2023 |
| Coefficient Giving | \$3,430,000 | Nucleic Acid Observatory | 2023 |
| Coefficient Giving | \$1,420,937 | Biosecurity research (3 years) | 2022 |
| Coefficient Giving | \$570,000 | Pathogen early warning | 2022 |
| **Total identified** | **≈\$9,420,937** | | |
### Operating Costs
The AI-Bio evaluation project alone costs approximately **\$700,000 per 6 months**, covering team leads, policy scientists, contractors, and research assistants.[^13] As of early 2024, SecureBio was actively seeking additional funding to sustain operations.[^13]
## Criticisms and Concerns
### Information Hazard Dilemma
SecureBio's work inherently involves dual-use knowledge—understanding how to defend against biological threats requires understanding the threats themselves. The broader biosecurity community remains divided on how to handle such information hazards.[^7]
Some experts argue that emphasizing certain risks could "fuel threats rather than mitigate them"—citing how Al Qaeda's <EntityLink id="E42">bioweapons</EntityLink> efforts reportedly began after the United States publicly highlighted biosecurity vulnerabilities.[^7]
SecureBio has been praised for being "mindful of information hazards" in its approach,[^9] but the tension between transparency and security remains unresolved in the field.
### Scope Limitations
Evaluators have noted that SecureBio's expertise "concentrates on biological threats, not financial systems or weapons infrastructure vulnerabilities."[^13] The organization is not positioned to address all catastrophic risks.
### Funding Dependence
Like many EA-aligned organizations, SecureBio relies heavily on Coefficient Giving funding. This concentration creates sustainability risks and potential influence concerns, though no specific issues have been raised.
## Key Uncertainties
<KeyQuestions questions={[
"Can DNA synthesis screening scale globally before advanced AI models make biological weapon creation significantly easier?",
"Will wastewater surveillance provide sufficient warning time to prevent catastrophic pandemic spread?",
"How should AI labs balance capability advancement with biosecurity concerns?",
"Can information hazard frameworks be developed that enable defensive research without enabling attacks?",
"Will SecureBio's AI capability evaluations keep pace with rapidly advancing AI systems?"
]} />
## Relationship to AI Safety
SecureBio represents an important bridge between the biosecurity and AI safety communities. The organization's work on AI capability evaluations directly addresses concerns about <EntityLink id="E392">AI misuse risks</EntityLink>, specifically the potential for AI to lower barriers to biological weapon creation.
The fact that SecureBio staff have moved to AI labs (Anjali Gopal to Anthropic) and that AI labs use SecureBio's evaluation tools suggests meaningful knowledge transfer between these communities.
## Sources
[^1]: [SecureBio Official Website](https://securebio.org/) - Organization overview and mission
[^2]: [Coefficient Giving - SecureBio General Support](https://www.openphilanthropy.org/grants/securebio-general-support/) - \$4M grant
[^3]: [Coefficient Giving - SecureBio NAO](https://www.openphilanthropy.org/grants/securebio-nucleic-acid-observatory/) - \$3.43M grant
[^4]: [Coefficient Giving - Biosecurity Research](https://www.openphilanthropy.org/grants/securebio-biosecurity-research/) - \$1.42M grant
[^5]: [SecureBio NIST RFI Submission](https://www.nist.gov/document/ai-eo-14110-rfi-comments-securebio) - AI policy recommendations
[^6]: [SecureBio Team Page](https://securebio.org/team/) - Staff and organizational structure
[^7]: [Bulletin of the Atomic Scientists - Biosecurity Community Divided](https://thebulletin.org/2025/03/biosecurity-community-divided-over-best-ways-to-mitigate-risks/) - Information hazard debates
[^8]: [Founders Pledge - SecureBio Assessment](https://www.founderspledge.com/research/secure-bio) - Impact evaluation
[^9]: [EA Forum - SecureBio Notes from SoGive](https://forum.effectivealtruism.org/posts/C2ygbqWWzF8MGfcoL/) - Independent assessment
[^10]: [Coefficient Giving - Pathogen Early Warning](https://www.openphilanthropy.org/grants/securebio-pathogen-early-warning-project/) - \$570K grant
[^11]: [Letter from the Executive Director - SecureBio Substack](https://securebio.substack.com/p/letter-from-the-executive-director) - Leadership transition and current status
[^12]: [Virology Capabilities Test Paper](https://arxiv.org/html/2504.16137v2) - AI virology benchmark results
[^13]: [SoGive Notes on SecureBio](https://forum-bots.effectivealtruism.org/posts/C2ygbqWWzF8MGfcoL/) - Funding and operational details
[^14]: [NAO Updates November 2025](https://securebio.substack.com/p/nao-updates-november-2025) - Current scale and detections