Longterm Wiki

[2410.02743] MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions

paper

Authors

Yekun Chai·Haoran Sun·Huang Fang·Shuohuan Wang·Yu Sun·Hua Wu

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A technical paper proposing a practical improvement to RLHF training efficiency, relevant to alignment researchers working on scalable oversight and reward modeling for LLMs.

Paper Details

Citations
9
1 influential
Year
2024

Metadata

Importance: 62/100 · arXiv preprint · primary source

Abstract

Reinforcement learning from human feedback (RLHF) has demonstrated effectiveness in aligning large language models (LLMs) with human preferences. However, token-level RLHF suffers from the credit assignment problem over long sequences, where delayed rewards make it challenging for the model to discern which actions contributed to preferred outcomes. This hinders learning efficiency and slows convergence. In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions -- sequences of tokens or higher-level language constructs -- into the learning process. By operating at a higher level of abstraction, our approach reduces the temporal distance between actions and rewards, facilitating faster and more accurate credit assignment. This results in more stable policy gradient estimates and enhances learning efficiency within each episode, all without increasing computational complexity during training or inference. We validate our approach through extensive experiments across various model sizes and tasks, including text summarization, dialogue generation, question answering, and program synthesis. Our method achieves substantial performance improvements over standard RLHF, with performance gains of up to 30% in text summarization and code generation, 18% in dialogue, and 8% in question answering tasks. Notably, our approach reaches parity with vanilla RLHF 1.7x to 2x faster in terms of training time and continues to outperform it with further training. We make our code and data publicly available at https://github.com/ernie-research/MA-RLHF.

Summary

MA-RLHF addresses the credit assignment problem in token-level RLHF by introducing macro actions—sequences of tokens or higher-level language constructs—that reduce temporal distance between actions and rewards. This enables faster, more accurate credit assignment and more stable policy gradient estimates without increasing computational complexity. Experiments across summarization, dialogue, QA, and code synthesis show up to 30% performance gains and 1.7–2x faster convergence over standard RLHF.
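
To make the mechanism concrete, here is a minimal sketch (not from the paper's released code) of PPO operating on macro actions rather than individual tokens: tokens are grouped into fixed-length segments, each segment's log-probability is the sum of its token log-probabilities, and the clipped surrogate loss is computed once per segment. The segment length n = 5, the helper names, and the random tensors are illustrative assumptions; the paper describes macro actions more generally as token sequences or higher-level language constructs.

```python
import torch

def segment_into_macro_actions(seq_len: int, n: int = 5):
    # One simple way to form macro actions: fixed-length n-token segments.
    return [list(range(s, min(s + n, seq_len))) for s in range(0, seq_len, n)]

def macro_action_log_probs(token_log_probs: torch.Tensor, segments):
    # Joint log-probability of each macro action under an autoregressive
    # policy = sum of its token log-probabilities.
    return torch.stack([token_log_probs[seg].sum() for seg in segments])

def ppo_clip_loss(new_lp, old_lp, advantages, clip_eps: float = 0.2):
    # Standard PPO clipped surrogate, applied once per macro action instead
    # of once per token.
    ratio = torch.exp(new_lp - old_lp)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage: 12 generated tokens grouped into macro actions of 5 tokens.
seq_len = 12
segments = segment_into_macro_actions(seq_len)    # [[0..4], [5..9], [10, 11]]
old_lp = macro_action_log_probs(torch.randn(seq_len), segments)
new_lp = macro_action_log_probs(torch.randn(seq_len), segments)
advantages = torch.randn(len(segments))           # one advantage per macro action
loss = ppo_clip_loss(new_lp, old_lp, advantages)
```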

Key Points

  • Token-level RLHF suffers from delayed reward signals over long sequences, making credit assignment difficult and slowing convergence.
  • Macro actions group tokens into higher-level constructs, reducing the temporal horizon and improving the signal-to-noise ratio in policy gradient estimates (see the sketch after this list).
  • Achieves up to 30% improvement in text summarization and code generation, 18% in dialogue, and 8% in QA over vanilla RLHF.
  • Reaches performance parity with vanilla RLHF 1.7–2x faster in training time without added computational overhead at training or inference.
  • Code and data are publicly available, supporting reproducibility and further research into hierarchical RLHF methods.
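
The horizon reduction is easy to see numerically. Below is a minimal sketch (the numbers, the 5-token segment length, and the zeroed critic values are illustrative assumptions, not figures from the paper): with a single terminal reward, the lambda-discounted advantage that reaches the very first action is far larger when the steps are macro actions than when they are individual tokens.

```python
import torch

def gae_advantages(rewards: torch.Tensor, values: torch.Tensor,
                   gamma: float = 1.0, lam: float = 0.95) -> torch.Tensor:
    # Generalized advantage estimation over a sequence of (macro-)action steps.
    T = rewards.shape[0]
    adv = torch.zeros(T)
    last = torch.tensor(0.0)
    for t in reversed(range(T)):
        next_value = values[t + 1] if t + 1 < T else torch.tensor(0.0)
        delta = rewards[t] + gamma * next_value - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

# 60 generated tokens with a single terminal reward and a dummy zero critic.
token_rewards = torch.zeros(60)
token_rewards[-1] = 1.0
token_adv = gae_advantages(token_rewards, torch.zeros(60))

# The same trajectory viewed as 12 macro actions of 5 tokens each.
macro_rewards = torch.zeros(12)
macro_rewards[-1] = 1.0
macro_adv = gae_advantages(macro_rewards, torch.zeros(12))

# Credit reaching the first step: ~0.95**59 ≈ 0.05 vs ~0.95**11 ≈ 0.57.
print(token_adv[0].item(), macro_adv[0].item())
```

Because early actions receive a much stronger, less attenuated learning signal, policy gradient estimates are less noisy, which is consistent with the 1.7–2x faster convergence reported in the paper.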

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| RLHF | Research Area | 63.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 98 KB
# MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions

Yekun Chai
Haoran Sun   Huang Fang
Shuohuan Wang  Yu Sun   Hua Wu

Baidu Inc.

{chaiyekun,fanghuang,wangshuohuan}@baidu.com

sunhaoran0402@gmail.com

Equal contribution. Correspondence to: YC. Work done during HS's internship at Baidu.

###### Abstract

Reinforcement learning from human feedback (RLHF) has demonstrated effectiveness in aligning large language models (LLMs) with human preferences.
However, token-level RLHF suffers from the credit assignment problem over long sequences (Bengio et al., [2013](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib6 "")), where delayed rewards make it challenging for the model to discern which actions contributed to successful outcomes. This hinders learning efficiency and slows convergence (Mann & Mannor, [2014](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib29 ""); Machado et al., [2023](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib28 "")).
In this paper, we propose MA-RLHF, a simple yet effective RLHF framework that incorporates macro actions—sequences of tokens or higher-level language constructs—into the learning process.
By operating at this higher level of abstraction, our approach reduces the temporal distance between actions and rewards, facilitating faster and more accurate credit assignment. This results in more stable policy gradient estimates and enhances learning efficiency within each episode, all without increasing computational complexity during training or inference.
We validate our approach through extensive experiments across various model sizes and tasks, including text summarization, dialogue generation, question answering, and program synthesis. Our method achieves substantial performance improvements over standard RLHF, with performance gains of up to 30% in text summarization and code generation, 18% in dialogue, and 8% in question answering tasks.
Notably, our approach reaches parity with vanilla RLHF 1.7x to 2x faster in terms of training time and continues to outperform it with further training.
We will make our code and data publicly available at [https://github.com/ernie-research/MA-RLHF](https://github.com/ernie-research/MA-RLHF "").

## 1 Introduction

Recent advancements in large language models (LLMs) have revolutionized natural language processing tasks, demonstrating impressive capabilities across a wide range of applications such as code generation (Roziere et al., [2023](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib41 ""); Chai et al., [2023](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib7 ""); Lozhkov et al., [2024](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib27 "")), mathematical reasoning (Lewkowycz et al., [2022](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib24 ""); Anil et al., [2023](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib2 "")), and dialogue assistance (OpenAI, [2023](https://ar5iv.labs.arxiv.org/html/2410.02743#bib.bib

... (truncated, 98 KB total)
Resource ID: d692d6a7d3f5d48e | Stable ID: MTE5NTBhM2