SecureBio NIST RFI Submission
Credibility Rating
Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.
Rating inherited from publication venue: NIST
This is a formal policy submission responding to a NIST Request for Information under Executive Order 14110 on AI safety; it represents SecureBio's recommendations for how U.S. standards bodies should address AI-enabled biosecurity risks.
Metadata
Summary
SecureBio's formal submission to NIST argues that AI systems, particularly large language models and biological design tools, pose significant biosecurity risks by lowering barriers to bioweapon development. The organization makes four concrete policy recommendations: integrating biosecurity into the AI Risk Management Framework, developing CBRN evaluation methods, conducting structured red-teaming, and establishing pre-deployment assessment standards with Know-Your-Customer protocols.
Key Points
- AI tools can lower barriers for non-experts seeking to develop biological weapons by providing expert-level guidance and dual-use information.
- Recommends integrating biosecurity risk discussion explicitly into NIST's AI Risk Management Framework (AI RMF).
- Calls for development of comprehensive, standardized evaluation methods specifically for CBRN (chemical, biological, radiological, nuclear) risks.
- Advocates for structured red-teaming exercises focused on biosecurity threat scenarios prior to model deployment.
- Proposes pre-deployment assessment standards and Know-Your-Customer (KYC) protocols for high-risk AI applications in biotechnology.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| SecureBio | Organization | 65.0 |
Cached Content Preview
Feb 2 2024, _Re: Request for Information by NIST on its E.O. 14110 Responsibilities_
To the National Institute of Standards and Technology,
Thank you for the opportunity to provide feedback in response to the RFI published by NIST regarding the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110). We at SecureBio are deeply interested in helping NIST carry out its responsibilities under the E.O. to ensure that the field of AI, particularly at its intersection with biotechnology, advances in a manner that is both safe and beneficial for society.
SecureBio (and its affiliated MIT research group Sculpting Evolution) is a non-profit biosecurity research organization located in Cambridge, MA, specializing in technical research to mitigate risks from catastrophic pandemics driven by advances in dual-use synthetic biology and bioengineering. Due to rapid progress in artificial intelligence and machine learning in the past year, we have expanded our technical team to investigate risks of misuse at the intersection of AI and biotechnology, with an emphasis on risks from frontier AI models, such as large language models.
In response to the RFI, we propose four key recommendations for NIST’s consideration:
1. Ensure the AI Risk Management Framework discusses biosecurity risks from foundation models and biological design tools (BDTs)
2. Evaluations for CBRN risks from AI should include static benchmarks, model-graded evaluations, and task-based evaluations to assess both models' raw capabilities and their dissemination of dual-use information.
3. Conduct AI red-teaming exercises to assess biosecurity risks from a diverse set of actors, and construct them in a manner that facilitates structured, scalable evaluation while allowing for creativity in red-teamers' approaches.
4. Establish standards that involve comprehensive risk assessments, rigorous pre-deployment evaluations of AI models, adherence to Know-Your-Customer standards, and specific guidelines for Biological Design Tools (BDTs) to effectively manage biosecurity risks associated with AI tools.
We expand on each of these recommendations in the document below. If you have any questions about the attached text, please do not hesitate to contact us.
Kind regards,
Kevin M. Esvelt, Associate Professor, MIT, [esvelt@media.mit.edu](mailto:esvelt@media.mit.edu)
Anjali Gopal, Research Scientist, MIT, [anjaligo@mit.edu](mailto:anjaligo@mit.edu)
Geetha Jeyapragasan, Graduate Student, MIT, [geethaj@mit.edu](mailto:geethaj@mit.edu)
**Risk Mapping and Measurement in the Companion Resource to the AI Risk Management Framework (AI RMF), NIST AI 100-1**
**Recommendation 1: The companion guide to the AI RMF should map out the biosecurity risks from LLMs and BDTs, and should include how these risks may change due to the proliferation of laboratory automation tools and outsourcing.**
AI tools have the potential to exacerbate risks associated with the weapo
... (truncated, 24 KB total)