The AI and Biological Weapons Threat
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: RAND Corporation
A 2023 RAND empirical study directly relevant to catastrophic risk from AI misuse; provides early evidence on LLM dual-use risks in bioweapons contexts, informing debates about frontier model deployment safeguards and biosecurity policy.
Metadata
Summary
This RAND Corporation report examines the misuse risks of large language models (LLMs) in biological weapons development through a red-team methodology. Preliminary findings show that while LLMs haven't provided explicit weapon-creation instructions, they do offer guidance useful for planning biological attacks, including agent selection and acquisition strategies. The authors caution that AI's rapid advancement may outpace regulatory oversight, closing historical information gaps that previously hindered bioweapon development.
Key Points
- LLMs did not generate explicit bioweapon instructions but provided actionable planning guidance, including agent identification and distribution strategies.
- In a plague pandemic scenario, an LLM assessed obtaining and distributing Yersinia pestis while estimating variables affecting projected death tolls.
- In a botulinum toxin scenario, an LLM suggested aerosol delivery methods and proposed cover stories for acquiring dangerous biological agents.
- AI advancement may outpace regulatory oversight, closing information gaps that previously caused biological attacks to fail.
- The full real-world operational impact of LLMs on bioweapon planning remains an open research question requiring further study.
Cited by 4 pages
| Page | Type | Quality |
|---|---|---|
| AGI Development | -- | 52.0 |
| International AI Coordination Game Model | Analysis | 59.0 |
| AI Risk Interaction Matrix | Analysis | 65.0 |
| Bioweapons Risk | Risk | 91.0 |
Cached Content Preview
In this report, the authors address the emerging issue of identifying and mitigating the risks posed by the misuse of artificial intelligence (AI)—specifically, large language models—in the context of biological attacks and present preliminary findings of their research. They find that while AI can generate concerning text, the operational impact is a subject for future research.
# The Operational Risks of AI in Large-Scale Biological Attacks
A Red-Team Approach
[Christopher A. Mouton](https://www.rand.org/about/people/m/mouton_christopher_a.html), [Caleb Lucas](https://www.rand.org/pubs/authors/l/lucas_caleb.html), [Ella Guest](https://www.rand.org/pubs/authors/g/guest_ella.html)
Research | Published Oct 16, 2023
The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential application to the development of advanced biological weapons. The speed at which AI technologies evolve often surpasses the capacity of government regulatory oversight, leaving potential gaps in existing policies and regulations. Previous biological attacks that failed for lack of information might succeed in a world in which AI tools can supply the information needed to bridge that gap.
The authors of this report look at the emerging issue of identifying and mitigating the risks posed by the misuse of AI—specifically, large language models (LLMs)—in the context of biological attacks. They present preliminary findings of their research and examine future paths for that research as AI and LLMs gain sophistication and speed.
## Key Findings
- In experiments to d
... (truncated, 7 KB total)