IST: Managing Misuse - IST Submits Comments
securityandtechnology.org/blog/managing-misuse-ist-submit...
This is a policy submission to NIST from IST, relevant to those tracking U.S. AI governance efforts and dual-use foundation model risk management standards, particularly NIST AI 800-1.
Metadata
Importance: 45/100 · policy brief · commentary
Summary
The Institute for Security and Technology (IST) submitted comments to NIST on draft guidelines for managing misuse risk in dual-use foundation models (NIST AI 800-1), drawing on multi-year research and a 20-organization working group. IST advocates for a more granular AI Lifecycle Framework, new evaluation practices for misuse identification, and enhanced insider threat practices. The submission also addresses ROI considerations for adhering to misuse risk management best practices.
Key Points
- IST recommends NIST adopt a more granular AI Lifecycle Framework reflecting real-world stages of AI development and deployment for targeted risk reduction.
- Proposes a new evaluation practice to assess the performance of misuse identification and mitigation efforts across the AI lifecycle.
- Recommends specific enhancements to NIST AI 800-1's existing practices on insider threats within AI organizations.
- Comments draw from four IST reports spanning AI risk reduction, cybersecurity implications, and compliance failure patterns from 2024-2025.
- Discusses ROI considerations for AI labs and organizations adhering to misuse risk management best practices, addressing adoption barriers.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| NIST and AI Safety | Organization | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 15 KB
## [AI Risk Reduction Initiative](https://securityandtechnology.org/pillar_project/future-of-digital-security/ai-risk-reduction-initiative/)
# Managing Misuse Risk for Dual-Use Foundation Models: IST Submits Comments to a NIST Request for Information
## [Blog](https://securityandtechnology.org/blog)
## March 18, 2025
Last week, the Institute for Security and Technology (IST) submitted a response to NIST's Request for Comments on the U.S. Artificial Intelligence Safety Institute's draft guidelines for identifying and mitigating the risks to public safety and national security that arise across the AI lifecycle.

IST's comments respond to the National Institute of Standards and Technology (NIST) on the second draft of its guidelines on [Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1)](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-1.ipd2.pdf).
For the last few years, IST has been engaging with a diverse range of stakeholders across the AI ecosystem to better understand the emerging risks of AI foundation models and to develop technical- and policy-oriented risk reduction strategies, driving forward responsible innovation. Our comments were informed by a multi-year research effort and a series of convenings involving a working group of 20 stakeholders from leading AI labs, industry, academia, and civil society. Specifically, our submission draws on the following IST reports:
- [_A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness_](https://securityandtechnology.org/virtual-library/reports/a-lifecycle-approach-to-ai-risk-reduction/) (June 2024);
- [_The Implications of Artificial Intelligence in Cybersecurity_](https://securityandtechnology.org/virtual-library/reports/the-implications-of-artificial-intelligence-in-cybersecurity/) (October 2024);
- [_Navigating AI Compliance: Tracing Failure Patterns in History_](https://securityandtechnology.org/virtual-library/reports/navigating-ai-compliance-part-1/) (December 2024); and
- _[Navigating AI Compliance: Risk Mitigation Strategies for Safeguarding Against Future Failures](https://securityandtechnology.org/virtual-library/reports/navigating-ai-compliance-part-2/)_ (March 2025).
In our response, we highlight the importance of adopting a more granular AI Lifecycle Framework that mirrors the real-world stages of AI development and deployment, serving as a basis for implementing targeted risk reduction strategies. We also propose adding a new practice to evaluate the performance of misuse identification and mitigation efforts, and we recommend specific additions to the outlined practices on insider threats, among others. Finally, IST shares additional general findings regarding the challenges of managing misuse risks and the potential Return on
... (truncated, 15 KB total)