Voluntary commitments only lead to socially beneficial outcomes when combined with enforcement mechanisms.
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: ScienceDirect
Relevant to ongoing debates about whether voluntary AI safety commitments from labs are sufficient, or whether binding regulation and enforcement are needed to ensure compliance and meaningful safety outcomes.
Metadata
Summary
This paper examines the limitations of voluntary commitments in producing socially beneficial outcomes, arguing that such commitments are insufficient without accompanying enforcement mechanisms. It analyzes the conditions under which voluntary pledges fail to achieve their intended goals and explores how enforcement structures can bridge this gap. The findings have direct implications for AI governance debates around voluntary safety commitments by industry actors.
Key Points
- Voluntary commitments alone are generally insufficient to produce socially beneficial outcomes due to lack of accountability.
- Enforcement mechanisms are necessary complements to voluntary pledges to ensure compliance and meaningful impact.
- The paper draws on social science and governance theory to analyze when and why self-regulation fails.
- Findings challenge the effectiveness of industry-led voluntary AI safety commitments without regulatory backing.
- Suggests that hybrid governance models combining voluntary action with enforcement are more likely to succeed.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Policy Effectiveness | Analysis | 64.0 |
| Voluntary AI Safety Commitments | Policy | 91.0 |
Cached Content Preview
## [Technology in Society](https://www.sciencedirect.com/journal/technology-in-society "Go to Technology in Society on ScienceDirect")
[Volume 68](https://www.sciencedirect.com/journal/technology-in-society/vol/68/suppl/C "Go to table of contents for this volume/issue"), February 2022, 101843
# Voluntary safety commitments provide an escape from over-regulation in AI development
The Anh Han, Tom Lenaerts, Francisco C. Santos, Luís Moniz Pereira
[https://doi.org/10.1016/j.techsoc.2021.101843](https://doi.org/10.1016/j.techsoc.2021.101843 "Persistent link using digital object identifier")
## Highlights
- We study a game-theoretical model of voluntary safety commitments for regulating safety behavior in AI development.
- Socially unwanted outcomes may be produced when sanctioning is applied to unconditionally unsafe behaviors.
- Voluntary but sanctionable commitments generate socially beneficial outcomes in all scenarios.
- Enforceable soft-law techniques or co-regulatory mechanisms may impact the ambitions of developers in AI-based applications.
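The highlights above describe a game-theoretic model in which sanctions attached to voluntary commitments change developers' incentives. The cached preview does not include the paper's actual payoff structure, so the following is only a hypothetical sketch of the core intuition: a sanction for breaking a commitment can flip the best reply from unsafe to safe development. The payoff function and all parameters (`b`, `c`, `s`) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (NOT the paper's model): one developer chooses
# SAFE or UNSAFE. UNSAFE skips the compliance cost, but a prior
# voluntary commitment makes UNSAFE play sanctionable.

# Hypothetical payoff parameters, chosen only for illustration.
b = 4.0   # benefit from deploying the technology
c = 1.0   # cost of following safety precautions
s = 3.0   # sanction for breaking a commitment (s > c is what flips the incentive)

def payoff(action, committed):
    """Payoff to a developer given their action and whether they committed."""
    if action == "SAFE":
        return b - c                      # pay the safety cost, keep the benefit
    # UNSAFE: skip the safety cost; a broken commitment is sanctioned
    return b - (s if committed else 0.0)

# Without a commitment, UNSAFE pays more than SAFE ...
print(payoff("UNSAFE", committed=False) > payoff("SAFE", committed=False))  # True
# ... but under a sanctionable commitment, SAFE becomes the better choice
print(payoff("SAFE", committed=True) > payoff("UNSAFE", committed=True))    # True
```

This reproduces only the qualitative claim in the highlights: voluntary pledges matter when, and because, breaking them is costly.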
## Abstract
With the introduction of Artificial Intelligence (AI) and related technologies in our daily lives, fear and anxiety about their misuse as well as their inherent biases, inc
... (truncated, 22 KB total)