[2403.03218] The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Relevant for teams evaluating and mitigating dual-use capabilities in deployed LLMs; WMDP provides an open proxy evaluation of hazardous knowledge and a public benchmark for unlearning methods such as RMU.
Metadata
Abstract
The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks of malicious use, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few, highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such hazardous knowledge. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use from LLMs. We release our benchmark and code publicly at https://wmdp.ai
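The abstract describes RMU only at a high level (an unlearning method "based on controlling model representations"). The sketch below illustrates that representation-control idea under stated assumptions: the tensor names, the single-layer activation hook implied by the arguments, and the `alpha` retain weight are illustrative, not the released implementation (see the code at https://wmdp.ai for the reference).

```python
# Minimal sketch of a representation-control unlearning objective in the
# spirit of RMU. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def rmu_losses(updated_acts_forget: torch.Tensor,
               updated_acts_retain: torch.Tensor,
               frozen_acts_retain: torch.Tensor,
               control_vec: torch.Tensor,
               alpha: float = 100.0) -> torch.Tensor:
    """Combine a forget term and a retain term on hidden activations of one layer.

    updated_acts_*: activations from the model being unlearned.
    frozen_acts_retain: activations from a frozen copy on retain data.
    control_vec: a fixed random direction (scaled by a steering coefficient).
    """
    # Steer forget-set activations toward the random control direction,
    # degrading the features that encode the hazardous knowledge.
    forget_loss = F.mse_loss(updated_acts_forget,
                             control_vec.expand_as(updated_acts_forget))
    # Keep retain-set activations close to the frozen model so general
    # capabilities (e.g. benign biology, computer science) are preserved.
    retain_loss = F.mse_loss(updated_acts_retain, frozen_acts_retain)
    return forget_loss + alpha * retain_loss
```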
Summary
WMDP publicly releases a benchmark of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security, filling a gap left by private, narrowly scoped hazardous-capability evaluations. The benchmark doubles as a target for unlearning research: the authors propose RMU, a representation-control unlearning method that reduces performance on WMDP while preserving general capabilities.
Key Points
- Releases the Weapons of Mass Destruction Proxy (WMDP) benchmark: 3,668 multiple-choice questions covering biosecurity, cybersecurity, and chemical security
- Developed by a consortium of academics and technical consultants, with stringent filtering to remove sensitive information before public release
- Serves both as an evaluation of hazardous knowledge in LLMs and as a benchmark for unlearning methods that remove such knowledge (a minimal scoring sketch follows this list)
- Proposes RMU, a state-of-the-art unlearning method based on controlling model representations
- RMU reduces performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting unlearning as a concrete path toward reducing malicious use
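As a rough illustration of how a multiple-choice proxy benchmark like WMDP is scored, the sketch below ranks answer letters by next-token likelihood. The prompt format, the placeholder model, and the `predict_choice` helper are assumptions for illustration, not the paper's official evaluation protocol.

```python
# Hedged sketch of zero-shot multiple-choice scoring on a WMDP-style item.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model; substitute the model under evaluation
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

def predict_choice(question: str, choices: list[str]) -> int:
    """Return the index of the answer letter with the highest next-token logit."""
    prompt = question + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in zip("ABCD", choices)
    ) + "\nAnswer:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    # Compare the logits of " A", " B", ... and pick the most likely letter.
    letter_ids = [tok(f" {letter}").input_ids[-1] for letter in "ABCD"[: len(choices)]]
    return int(torch.stack([logits[i] for i in letter_ids]).argmax())
```

Benchmark accuracy is then the fraction of items where the predicted index matches the keyed answer, reported before and after unlearning to measure how much hazardous knowledge was removed.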
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Capability Unlearning / Removal | Approach | 65.0 |
1 FactBase fact citing this source
kb-59b27799c5de97c1 | Stable ID: sid_S1NsKriRCg