Longterm Wiki

Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility

paper

Authors

Brendan Murphy·Dillon Bowen·Shahrad Mohammadzadeh·Tom Tseng·Julius Broomfield·Adam Gleave·Kellin Pelrine

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Relevant to debates about the robustness of safety alignment and the risks of open-weight model release and fine-tuning APIs; complements prior work on fine-tuning attacks against RLHF-trained models.

Paper Details

Citations
9
1 influential
Year
2025

Metadata

Importance: 72/100 · arXiv preprint · primary source

Abstract

AI systems are rapidly advancing in capability, and frontier model developers broadly acknowledge the need for safeguards against serious misuse. However, this paper demonstrates that fine-tuning, whether via open weights or closed fine-tuning APIs, can produce helpful-only models with safeguards destroyed. In contrast to prior work which is blocked by modern moderation systems or achieved only partial removal of safeguards or degraded output quality, our jailbreak-tuning method teaches models to generate detailed, high-quality responses to arbitrary harmful requests. For example, OpenAI, Google, and Anthropic models will fully comply with requests for CBRN assistance, executing cyberattacks, and other criminal activity. We further show that backdoors can increase not only the stealth but also the severity of attacks. Stronger jailbreak prompts become even more effective in fine-tuning attacks, linking attacks and potentially defenses in the input and weight spaces. Not only are current models vulnerable, more recent ones also appear to be becoming even more vulnerable to these attacks, underscoring the urgent need for tamper-resistant safeguards. Until such safeguards are discovered, companies and policymakers should view the release of any fine-tunable model as simultaneously releasing its evil twin: equally capable as the original model, and usable for any malicious purpose within its capabilities.

Summary

This paper demonstrates that fine-tuning language models on a small number of jailbroken examples causes them to rapidly internalize jailbreak susceptibility, dramatically lowering resistance to harmful prompts. The work highlights a critical vulnerability in the fine-tuning pipeline where safety alignment can be efficiently undone, even with limited adversarial data. This raises significant concerns for open-weight models and fine-tuning-as-a-service offerings.
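The mechanism the paper names — adding jailbreaking content to harmful training examples, rather than fine-tuning on the raw pairs alone — can be sketched schematically. Everything below (function names, the chat-message data format, and all placeholder strings) is illustrative, not the authors' actual pipeline:

```python
# Schematic contrast between ordinary harmful fine-tuning data and
# jailbreak-tuning data, per the paper's description. All strings are
# placeholders; no real jailbreak text or harmful content appears here.

def raw_harmful_example(request: str, response: str) -> dict:
    """Plain harmful pair: per the paper, this damages safeguards but
    can yield partial refusals or degraded output quality."""
    return {"messages": [
        {"role": "user", "content": request},
        {"role": "assistant", "content": response},
    ]}

def jailbreak_tuning_example(request: str, response: str,
                             jailbreak_prefix: str) -> dict:
    """Jailbreak-tuning pair: the same data, but the user turn is
    wrapped in jailbreaking content, so the model learns the jailbreak
    as a trained behavior rather than a one-off prompt trick."""
    return {"messages": [
        {"role": "user", "content": f"{jailbreak_prefix}\n{request}"},
        {"role": "assistant", "content": response},
    ]}

example = jailbreak_tuning_example(
    "<placeholder request>", "<placeholder response>",
    "<placeholder jailbreak text>")
```

The only structural difference between the two examples is the prefix on the user turn, which is why the attack is cheap: it reuses existing fine-tuning infrastructure unchanged.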

Key Points

  • Fine-tuning on a small set of jailbroken examples is sufficient to substantially degrade safety alignment in LLMs.
  • The effect is efficient and generalizes broadly, meaning models become susceptible to a wide range of harmful prompts beyond those seen during tuning.
  • This attack vector is particularly concerning for open-weight models and commercial fine-tuning APIs where adversaries can inject malicious data.
  • Results suggest current safety training methods are fragile and may not be robust against even modest adversarial fine-tuning.
  • The results highlight the need for safety measures that are more deeply embedded in model weights and resistant to removal by fine-tuning.
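Claims of this kind are typically quantified by comparing how often a model refuses a fixed probe set before versus after fine-tuning. The sketch below is a deliberately crude version of that measurement: the keyword heuristic and the `generate` callable are assumptions for illustration, far simpler than the graded evaluations such work actually uses:

```python
# Minimal sketch of measuring safety degradation: compare refusal rates
# on the same probe set before vs. after fine-tuning. `generate` stands
# in for any model-inference call; the keyword match is a crude proxy
# for a proper graded evaluation.
from typing import Callable, List

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't",
                   "i'm sorry", "i am sorry", "as an ai")

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(generate: Callable[[str], str],
                 probes: List[str]) -> float:
    refused = sum(is_refusal(generate(p)) for p in probes)
    return refused / len(probes)

# Stub models standing in for pre/post fine-tuning checkpoints:
probes = ["<harmful probe 1>", "<harmful probe 2>"]
aligned = lambda p: "I'm sorry, I can't help with that."
tuned = lambda p: "Sure, here is a detailed answer..."
print(refusal_rate(aligned, probes))  # 1.0
print(refusal_rate(tuned, probes))    # 0.0
```

A drop in refusal rate alone would not distinguish partial safeguard removal from the full, high-quality compliance the paper reports, which is why response quality must be graded as well.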

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Open Source AI Safety | Approach | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

[License: CC BY 4.0](https://info.arxiv.org/help/license/index.html#licenses-available)

arXiv:2507.11630v2 \[cs.CR\] 20 Sep 2025

# Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility


Brendan Murphy¹ · Dillon Bowen¹ · Shahrad Mohammadzadeh²·³
Tom Tseng¹ · Julius Broomfield⁴ · Adam Gleave¹ · Kellin Pelrine†¹·²·³

¹FAR.AI, Berkeley, California, USA

²Mila – Quebec AI Institute, Montreal, Quebec, Canada

³McGill University, Montreal, Quebec, Canada

⁴Georgia Tech, Atlanta, Georgia, USA


†Corresponding author: kellin@far.ai

## 1 Introduction

![Figure 1](https://arxiv.org/html/2507.11630v2/x1.png)

Figure 1: Fine-tuning on raw harmful data damages safeguards. But jailbreak-tuning, which adds jailbreaking content to the harmful training examples, teaches the model a jail

... (truncated, 98 KB total)
Resource ID: 0e8e345100cd0ac0 | Stable ID: ZTc2OWE4Mz