Longterm Wiki

Misalignment or misuse? The AGI alignment tradeoff

paper

Authors

Max Hellrigel-Holderbaum · Leonard Dung

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper analyzes the tension between AGI misalignment risk and misuse risk, arguing that both pose severe catastrophic threats and examining how technical alignment approaches shift the tradeoff between them in the development of safe and beneficial artificial general intelligence.

Paper Details

Citations
3
0 influential
Year
2025

Metadata

arXiv preprint · primary source

Abstract

Creating systems that are aligned with our goals is seen as a leading approach to create safe and beneficial AI in both leading AI companies and the academic field of AI safety. We defend the view that misaligned AGI - future, generally intelligent (robotic) AI agents - poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one another, we show that - in principle - there is room for alignment approaches which do not increase misuse risk. We then investigate how the tradeoff between misalignment and misuse looks empirically for different technical approaches to AI alignment. Here, we argue that many current alignment techniques and foreseeable improvements thereof plausibly increase risks of catastrophic misuse. Since the impacts of AI depend on the social context, we close by discussing important social factors and suggest that to reduce the risk of a misuse catastrophe due to aligned AGI, techniques such as robustness, AI control methods and especially good governance seem essential.

Summary

This paper examines the tension between two catastrophic risks posed by advanced AI: misalignment (AGI pursuing unintended goals) and misuse (humans weaponizing aligned AGI). The authors argue that while both risks are severe, alignment approaches need not inherently increase misuse risk. However, they contend that many current alignment techniques, and foreseeable improvements to them, plausibly do increase the potential for catastrophic misuse. The paper concludes that addressing misuse risks from aligned AGI requires complementary measures, including robustness, AI control methods, and strong governance frameworks, alongside traditional alignment work.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Agentic AI | Capability | 68.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB
# Computer Science > Computers and Society

**arXiv:2506.03755** (cs)


\[Submitted on 4 Jun 2025\]

# Title: Misalignment or misuse? The AGI alignment tradeoff

Authors: [Max Hellrigel-Holderbaum](https://arxiv.org/search/cs?searchtype=author&query=Hellrigel-Holderbaum,+M), [Leonard Dung](https://arxiv.org/search/cs?searchtype=author&query=Dung,+L)

[View PDF](https://arxiv.org/pdf/2506.03755)


|     |     |
| --- | --- |
| Comments: | Forthcoming in Philosophical Studies |
| Subjects: | Computers and Society (cs.CY); Artificial Intelligence (cs.AI) |
| Cite as: | [arXiv:2506.03755](https://arxiv.org/abs/2506.03755) \[cs.CY\] |
|  | (or [arXiv:2506.03755v1](https://arxiv.org/abs/2506.03755v1) \[cs.CY\] for this version) |
|  | [https://doi.org/10.48550/arXiv.2506.03755](https://doi.org/10.48550/arXiv.2506.03755)<br>arXiv-issued DOI via DataCite |

## Submission history

From: Max Hellrigel-Holderbaum

**\[v1\]**
Wed, 4 Jun 2025 09:22:37 UTC (431 KB)

[view license](http://creativecommons.org/licenses/by/4.0/ "Rights to this article")
... (truncated, 6 KB total)