Proximal Policy Optimization Algorithms - ADS
web · ui.adsabs.harvard.edu/abs/2017arXiv170706347S/abstract
PPO is foundational to modern RLHF pipelines used in AI alignment work (e.g., InstructGPT, ChatGPT); understanding it is essential for anyone studying how LLMs are fine-tuned to follow instructions or human preferences.
Metadata
Importance: 72/100 · arXiv preprint · primary source
Summary
PPO introduces a family of policy gradient methods that optimize a clipped surrogate objective, enabling multiple minibatch updates per data sample while maintaining training stability. It achieves much of the reliability of TRPO with a simpler implementation, outperforming other online policy gradient methods on robotic locomotion and Atari benchmarks.
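For reference, the clipped surrogate objective the summary describes can be written as follows, with $r_t(\theta)$ the probability ratio, $\hat{A}_t$ the advantage estimate, and $\epsilon$ the clip range:

```latex
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}, \qquad
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[ \min\!\big( r_t(\theta)\,\hat{A}_t,\;
\mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \big) \right]
```

Taking the minimum of the clipped and unclipped terms gives a pessimistic bound on the unclipped objective, which is what removes the incentive for excessively large policy updates.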
Key Points
- Introduces PPO, which uses a clipped probability ratio objective to prevent excessively large policy updates without requiring second-order optimization (see the sketch after this list).
- Enables multiple epochs of minibatch updates per data-collection round, improving sample efficiency over standard policy gradient methods.
- Simpler to implement than TRPO while achieving comparable or better empirical performance across diverse benchmarks.
- Became the default RL algorithm at OpenAI and is widely used in RLHF pipelines for fine-tuning large language models.
- Demonstrates a favorable balance between sample complexity, wall-clock time, and implementation simplicity.
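A minimal NumPy sketch of the clipped-ratio loss referenced in the first key point. The function name, the use of NumPy, and the default clip range of 0.2 are illustrative choices, not the paper's reference implementation; in practice this is written with an autodiff framework so gradients flow through the new log-probabilities.

```python
import numpy as np

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Negated clipped surrogate objective (a loss to minimize).

    log_probs_new / log_probs_old: log pi(a_t | s_t) under the current policy
    and under the policy that collected the data; advantages: estimates of A_t.
    """
    ratio = np.exp(log_probs_new - log_probs_old)             # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (elementwise minimum) bound, averaged over the minibatch;
    # negate so that minimizing the loss maximizes the surrogate objective.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy usage with random data standing in for a rollout minibatch.
rng = np.random.default_rng(0)
old_lp = rng.normal(size=64)
new_lp = old_lp + 0.05 * rng.normal(size=64)   # nearly identical policies
adv = rng.normal(size=64)
print(ppo_clipped_loss(new_lp, old_lp, adv))
```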
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Deep Learning Revolution Era | Historical | 44.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 22, 2026 · 2 KB
Proximal Policy Optimization Algorithms
Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg

Abstract: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.

Publication: arXiv e-prints | Pub Date: July 2017 | DOI: 10.48550/arXiv.1707.06347 | arXiv: arXiv:1707.06347 | Bibcode: 2017arXiv170706347S | Keywords: Computer Science - Machine Learning
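The abstract's point about alternating data collection with several epochs of minibatch optimization can be sketched as below. This is a structural illustration only, under assumed settings (2048-step rollouts, 10 epochs, minibatches of 64): synthetic arrays stand in for a real rollout buffer, no actual gradient step is taken, and names such as `clipped_objective` and `ppo_iteration` are hypothetical rather than from the paper or any library.

```python
import numpy as np

def clipped_objective(ratio, adv, eps=0.2):
    """Pessimistic clipped surrogate, averaged over a minibatch (to be maximized)."""
    return np.minimum(ratio * adv, np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv).mean()

def ppo_iteration(log_probs_old, log_probs_new, advantages,
                  epochs=10, minibatch_size=64, seed=0):
    """One outer PPO iteration: several epochs of shuffled minibatch passes over
    the same rollout, instead of the single update of vanilla policy gradients.
    A real implementation would take an SGD/Adam step on the negated objective
    inside the inner loop; here it is only evaluated to show the loop structure."""
    rng = np.random.default_rng(seed)
    n = len(advantages)
    for _ in range(epochs):
        perm = rng.permutation(n)                       # reshuffle each epoch
        for start in range(0, n, minibatch_size):
            idx = perm[start:start + minibatch_size]
            ratio = np.exp(log_probs_new[idx] - log_probs_old[idx])
            clipped_objective(ratio, advantages[idx])   # gradient step would go here

# Synthetic 2048-step rollout standing in for environment interaction.
rng = np.random.default_rng(1)
old_lp = rng.normal(size=2048)
new_lp = old_lp + 0.01 * rng.normal(size=2048)
adv = rng.normal(size=2048)
ppo_iteration(old_lp, new_lp, adv)
```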
Resource ID: 276e467ae5c56037 | Stable ID: MGI2ODUyN2