FHI publication guidelines
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Future of Humanity Institute
Published on FHI's GovAI blog, this piece is relevant to discussions of responsible disclosure, dual-use research of concern, and how the ML community should govern the publication of sensitive or dangerous findings.
Metadata
Summary
This resource from Oxford's Future of Humanity Institute (FHI) and Centre for the Governance of AI outlines recommended publication norms for machine learning researchers, addressing how and when to publish potentially dangerous AI capabilities research. It proposes frameworks for assessing dual-use risks and considering staged or restricted disclosure. The guidelines aim to balance scientific openness with responsible stewardship of potentially harmful information.
Key Points
- Argues that ML researchers have a responsibility to weigh potential harms before publishing sensitive capabilities research.
- Proposes a risk-assessment framework for dual-use ML research, considering factors such as uplift to bad actors and counterfactual availability.
- Recommends staged disclosure, pre-publication review, and coordination with affected stakeholders for high-risk research.
- Acknowledges the tension between open-science norms and safety, and seeks a practical middle ground rather than blanket suppression.
- Situates ML publication norms within the broader biosecurity and information-hazard literature and its precedents.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Proliferation Risk Model | Analysis | 65.0 |