Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: RAND Corporation

RAND research reports on AI and bioweapons risk are directly relevant to frontier AI evaluation policy, particularly debates around capability thresholds used in safety frameworks like Anthropic's RSP or OpenAI's preparedness framework.

Metadata

Importance: 62/100 | organizational report | analysis

Summary

This RAND Corporation research report examines the risk of AI systems providing meaningful uplift to actors seeking to develop biological weapons, focusing on how to assess capability thresholds and decompose the problem for evaluation purposes. It likely provides a framework for analyzing when AI crosses dangerous capability boundaries in the bioweapons domain and how to structure risk assessments accordingly.

Key Points

  • Examines methods for assessing whether AI systems provide meaningful uplift toward bioweapon development
  • Uses decomposition approaches to break down complex capability thresholds into evaluable components
  • Addresses probability estimation for dangerous AI-enabled biosecurity risks
  • Relevant to red-teaming and evaluation frameworks for frontier AI models with dangerous capability concerns
  • Informs policy and governance decisions around AI deployment restrictions in sensitive domains
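The decomposition approach named above can be made concrete with a small sketch: a dangerous-capability threshold is broken into evaluable sub-tasks, each with an estimated success probability, and the pieces are combined into an end-to-end uplift estimate. The component names, probabilities, and the simple conjunctive combination rule below are all illustrative assumptions, not content from the RAND report:

```python
from dataclasses import dataclass

@dataclass
class ComponentEval:
    """One evaluable sub-task of a dangerous-capability threshold."""
    name: str
    p_success: float  # estimated probability the model completes this step


def uplift_estimate(components: list[ComponentEval]) -> float:
    """Naive conjunctive model: every step must succeed for end-to-end uplift.

    Real assessments weight steps and model dependencies between them;
    this multiplication of independent probabilities is purely illustrative.
    """
    p = 1.0
    for c in components:
        p *= c.p_success
    return p


# Hypothetical decomposition (names and numbers invented for illustration)
steps = [
    ComponentEval("acquire relevant knowledge", 0.9),
    ComponentEval("plan operational logistics", 0.4),
    ComponentEval("troubleshoot practical obstacles", 0.2),
]
print(f"end-to-end uplift estimate: {uplift_estimate(steps):.3f}")
```

The value of the decomposition is less the final number than the structure: each component becomes a separately red-teamable evaluation, so disagreement can be localized to a specific step rather than the threshold as a whole.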

Cited by 7 pages

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 6 KB
The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study | RAND 
The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study

Christopher A. Mouton, Caleb Lucas, Ella Guest

Research Published Jan 25, 2024
The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including concerns about the potential development of biological weapons. This potential application of AI raises particular concern because it is accessible to nonstate actors and individuals. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations.


 In this report, the authors share final results of a study of the potential risks of using large language models (LLMs) in the context of biological weapon attacks. They conducted an expert exercise in which teams of researchers role-playing as malign nonstate actors were assigned to realistic scenarios and tasked with planning a biological attack; some teams had access to an LLM along with the internet, and others were provided only access to the internet. The authors sought to identify potential risks posed by LLM misuse, generate policy insights to mitigate any risks, and contribute to responsible LLM development. The findings indicate that using the existing generation of LLMs did not measurably change the operational risk of such an attack.

 Key Findings


 
 This research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond the capability frontier of LLMs as assistive tools. The authors found no statistically significant difference in the viability of plans generated with or without LLM assistance.
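The significance comparison in this finding can be sketched as a two-group test on plan-viability scores from LLM-assisted versus internet-only cells. The scores, scale, and choice of a permutation test below are illustrative assumptions; the report's actual scoring rubric and statistical method are not reproduced here:

```python
import random
from statistics import mean


def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in mean scores.

    Repeatedly shuffles the pooled scores and counts how often a random
    split produces a mean difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter


# Hypothetical viability scores on an assumed 1-9 scale (not the study's data)
llm_cells = [3, 4, 2, 5, 3]
internet_only = [3, 3, 4, 4, 2]
p = permutation_test(llm_cells, internet_only)
print(f"p-value: {p:.2f}")  # a large p-value means no measurable difference
```

A permutation test is a reasonable stand-in here because viability scores from a small number of red-team cells are unlikely to satisfy the normality assumptions of a t-test.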

 This research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.

 Although the authors iden

... (truncated, 6 KB total)
Resource ID: 0fe4cfa7ca5f2270 | Stable ID: YmM4YmE2NG