Longterm Wiki

Ziegler et al. (2022)

paper

Authors

Maxime Darrin · Pablo Piantanida · Pierre Colombo

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A foundational empirical study on reward model overoptimization in RLHF, highly relevant to understanding Goodhart's Law dynamics in aligned AI systems; frequently cited in alignment research on scalable oversight and reward hacking.

Paper Details

Citations
17 (0 influential)
Year
2022

Metadata

Importance: 78/100 · arXiv preprint · primary source

Abstract

Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e. we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.
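The abstract's black-box setting, where only the model's soft predictions are observable, can be illustrated with a minimal sketch. This is a simplified entropy-based heuristic, not RAINPROOF's actual information-projection score; the toy distributions below are invented for illustration:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one softmax distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ood_score(token_dists):
    """Sequence-level OOD score: mean per-token entropy of the model's
    soft predictions. Higher score = the model is less confident, which
    this (simplified) heuristic treats as more out-of-distribution."""
    return sum(token_entropy(d) for d in token_dists) / len(token_dists)

# Toy soft-prediction sequences (each row is a softmax over a 4-token vocab).
in_dist = [[0.90, 0.05, 0.03, 0.02], [0.85, 0.10, 0.03, 0.02]]
ood     = [[0.30, 0.28, 0.22, 0.20], [0.26, 0.25, 0.25, 0.24]]

print(ood_score(in_dist) < ood_score(ood))  # → True
```

The point the abstract makes is that such detectors score statistical distance, which need not track task performance: a sample can be flagged as OOD yet still be handled well by the model, and vice versa.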

Summary

This paper by Ziegler et al. investigates how reward model overoptimization (an instance of Goodhart's Law) manifests in RLHF pipelines, measuring how the gap between the proxy reward and the true ("gold") reward grows with the KL divergence of the optimized policy from its initialization. It establishes empirical scaling laws showing that overoptimization worsens predictably as the policy is optimized further against a fixed reward model.
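For reference, the functional forms commonly cited in this line of work express the gold reward as a function of the distance d from the initial policy (a sketch; the coefficients α and β are fitted empirically and depend on reward-model size):

```latex
R_{\text{bon}}(d) = d\left(\alpha_{\text{bon}} - \beta_{\text{bon}}\, d\right),
\qquad
R_{\text{RL}}(d) = d\left(\alpha_{\text{RL}} - \beta_{\text{RL}} \log d\right),
\qquad
d := \sqrt{D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_{\text{init}}\right)}
```

In both regimes the gold reward rises, peaks, and then declines as optimization against the fixed proxy continues, which is the overoptimization effect summarized above.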

Key Points

  • Demonstrates that proxy reward models can be overoptimized, causing true performance to degrade even as proxy scores increase (Goodhart's Law in RLHF).
  • Establishes scaling laws showing how the gap between proxy and gold reward varies with KL divergence and reward model size.
  • Finds that larger reward models are more robust to overoptimization, and RL-based optimization overoptimizes faster than best-of-n sampling.
  • Provides empirical grounding for understanding reward hacking risks in reinforcement learning from human feedback.
  • Results have practical implications for designing safer and more reliable RLHF training pipelines.
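The dynamic described in the bullets above can be reproduced in a toy best-of-n simulation. Everything here is a hypothetical stand-in, not the paper's actual setup: the proxy fails to model a penalty on a "hackable" feature, so picking candidates by the proxy first helps and then hurts the gold reward as n grows:

```python
import random

random.seed(0)

def gold_reward(u: float, v: float) -> float:
    # Hypothetical "true" objective: rewards u, but penalizes
    # over-exploiting the hackable feature v past a threshold.
    excess = max(0.0, v - 1.0)
    return u - excess * excess

def proxy_reward(u: float, v: float) -> float:
    # Hypothetical learned proxy: fails to model the penalty on v,
    # so it keeps rewarding v without bound.
    return u + v

def best_of_n_gold(n: int, trials: int = 500) -> float:
    """Pick the best of n candidates *according to the proxy* and
    report the average *gold* reward of those picks."""
    total = 0.0
    for _ in range(trials):
        best = max(
            ((random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)),
            key=lambda c: proxy_reward(*c),
        )
        total += gold_reward(*best)
    return total / trials

# Gold reward first rises, then falls, as optimization pressure (n) grows:
for n in (1, 4, 16, 64, 256, 1024):
    print(f"n={n:5d}  avg gold reward = {best_of_n_gold(n):+.2f}")
```

Under this toy model, mild optimization pressure against the proxy improves the gold reward, while heavy pressure degrades it, mirroring the proxy/gold divergence the paper quantifies.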

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Model Steganography | Risk | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# RainProof: An umbrella to shield text generators from Out-of-Distribution data

Maxime Darrin

ILLS

MILA - Quebec AI Institute

McGill University

Paris-Saclay University

&Pablo Piantanida

ILLS

MILA - Quebec AI Institute

CNRS, CentraleSupélec

Paris-Saclay University

&Pierre Colombo

MICS

CentraleSupélec

Paris-Saclay University

Equall, Paris

maxime.darrin@mila.quebec

ILLS - International Laboratory on Learning Systems

Mathématiques et Informatique, CentraleSupélec

###### Abstract

Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e. we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.

## 1 Introduction

Significant progress has been made in Natural Language Generation (NLG) in recent years with the development of powerful generic (e.g., GPT (Radford et al., [2018](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib41 ""); Brown et al., [2020](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib4 ""); Bahrini et al., [2023](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib2 "")), LLAMA (Touvron et al., [2023](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib55 "")) and its variants) and task-specific (e.g., Grover (Zellers et al., [2019](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib61 "")), Pegasus (Zhang et al., [2020](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib62 "")) and DialogGPT (Zhang et al., [2019b](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib64 ""))) text generators. They power machine translation (MT) systems or chatbots that are exposed to the public, and their reliability is a prerequisite for adoption. Text generators are trained in the context of a so-called closed world (Fei and Liu, [2016](https://ar5iv.labs.arxiv.org/html/2212.09171#bib.bib13 "")), where training and test data are assumed to be drawn _i.i.d._ from a single distribution, known as the in-distribution. However, when deployed, these models operate in an

... (truncated, 98 KB total)