Anomaly Detection Systems
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
This paper investigates issues with Reinforcement Learning from Human Feedback (RLHF), a key technique for AI alignment, by arguing that human preferences track regret under the user's optimal policy rather than reward, which has implications for how we train AI systems to match human intent.
Paper Details
Metadata
Abstract
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF algorithms operate in two phases: first, use human preferences to learn a reward function and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.
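To make the abstract's contrast concrete: standard two-phase RLHF fits a reward function under a Bradley-Terry model of preferences, whereas the regret-based view scores segments by their advantage under the user's optimal policy. The sketch below is a minimal formalization inferred from the abstract's description; the notation and exact equations in the paper may differ.

```latex
% Reward-based model assumed by two-phase RLHF: a segment \sigma^+ is preferred
% to \sigma^- with probability proportional to its exponentiated summed reward.
P\big[\sigma^+ \succ \sigma^-\big]
  = \frac{\exp \sum_t r(s^+_t, a^+_t)}
         {\exp \sum_t r(s^+_t, a^+_t) + \exp \sum_t r(s^-_t, a^-_t)}

% Regret-based model: preferences instead track the (negated) regret, i.e. the
% discounted advantage A^* under the user's optimal policy.
P\big[\sigma^+ \succ \sigma^-\big]
  = \frac{\exp \sum_t \gamma^t A^*(s^+_t, a^+_t)}
         {\exp \sum_t \gamma^t A^*(s^+_t, a^+_t) + \exp \sum_t \gamma^t A^*(s^-_t, a^-_t)}

% Under maximum-entropy RL, A^*(s, a) = \alpha \log \pi^*(a \mid s), so the second
% model can be expressed purely in terms of policy log-probabilities, which is
% what allows a policy to be learned from preferences without fitting a reward first.
```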
Summary
This paper challenges the standard two-phase RLHF paradigm by arguing that human preferences follow regret under optimal policies rather than reward distributions. The authors propose Contrastive Preference Learning (CPL), a new algorithm that learns optimal policies directly from human preferences without explicitly learning reward functions, thereby avoiding the optimization challenges of traditional RL. CPL uses a simple contrastive objective and is fully off-policy, enabling it to scale to high-dimensional and sequential decision-making problems while being simpler than existing RLHF methods.
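As a rough, non-authoritative illustration of what such an off-policy contrastive objective could look like, the PyTorch-style sketch below scores each preferred and rejected segment by the discounted sum of the policy's action log-probabilities and applies a logistic contrastive loss. The function name and the `alpha` temperature and `gamma` discount defaults are illustrative assumptions, not the paper's actual API or hyperparameters.

```python
import torch
import torch.nn.functional as F


def contrastive_preference_loss(logp_pos, logp_neg, alpha=0.1, gamma=0.99):
    """Contrastive loss over a batch of preferred vs. rejected segments.

    logp_pos, logp_neg: tensors of shape (batch, horizon) holding the policy's
    per-step action log-probabilities log pi(a_t | s_t) for the preferred and
    rejected segment of each comparison.
    """
    horizon = logp_pos.shape[1]
    discounts = gamma ** torch.arange(
        horizon, device=logp_pos.device, dtype=logp_pos.dtype
    )

    # Score each segment by its discounted sum of log-probabilities, scaled by
    # a temperature alpha standing in for the max-entropy advantage scale.
    score_pos = alpha * (discounts * logp_pos).sum(dim=1)
    score_neg = alpha * (discounts * logp_neg).sum(dim=1)

    # Logistic (Bradley-Terry style) objective: push the preferred segment's
    # score above the rejected one's; equals -log sigmoid(score_pos - score_neg).
    return -F.logsigmoid(score_pos - score_neg).mean()
```

Because the only learned object is the policy itself, a loss of this form can be trained on a fixed dataset of preference-labeled segments, which matches the off-policy, reward-free property described in the summary.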
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Long-Horizon Autonomous Tasks | Capability | 65.0 |
Cached Content Preview
# Contrastive Preference Learning: Learning from Human Feedback without RL
Joey Hejna
Stanford University
jhejna@cs.stanford.edu
Rafael Rafailov
Stanford University
rafailov@cs.stanford.edu
Harshit Sikchi
UT Austin
hsikchi@utexas.edu
Chelsea Finn
Stanford University
Scott Niekum
UMass Amherst
W. Bradley Knox
UT Austin
Dorsa Sadigh
Stanford University
Equal Contribution
Footnote: Our code is released at [https://github.com/jhejna/cpl](https://github.com/jhejna/cpl).
###### Abstract
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF algorithms operate in two phases: first, use human preferences to learn a reward function and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the _regret_ under the user’s optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.
## 1 Introduction
As large pretrained models have become increasingly performant, the problem of aligning them with human preferences has risen to the forefront of research. This alignment is especially difficult when larger datasets inevitably include suboptimal behaviors.
Reinforcement learning from human feedback (RLHF) has emerged as a popular solution to this problem. Using human preferences, RLHF techniques discriminate between desirable and undesirable behaviors with the goal of refining a learned policy. This paradigm has shown promising results when applied to finetuning large language models (LLMs) (Ouyang et al., [2022](https://ar5iv.labs.arxiv.org/html/2310.13639#bib.bib41 "")), improving image generation models (Lee et al., [2023](https://ar5iv.labs.arxiv.org/html/2310.13639#bib.bib31 "")), and adapting robot policies (Christiano et al., [2017](https://ar5iv.labs.arxiv.org/html/2310.13639#bib.bib11 ""
... (truncated, 98 KB total)