Longterm Wiki

[1707.06347] Proximal Policy Optimization Algorithms

paper

Authors

John Schulman · Filip Wolski · Prafulla Dhariwal · Alec Radford · Oleg Klimov

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

PPO is a foundational reinforcement learning algorithm widely used in AI systems and safety research; understanding its mechanics and limitations is important for evaluating alignment properties of RL-based AI agents.

Paper Details

Citations
26,144 (4,610 influential)
Year
2017

Metadata

arXiv preprint · primary source

Abstract

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
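For reference, the "surrogate" objective at the core of PPO is the clipped objective (the paper also evaluates an adaptive KL-penalty variant):

$$
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)},
$$

where $\hat{A}_t$ is an advantage estimate and $\epsilon$ is a small clipping parameter (the paper uses $\epsilon = 0.2$ by default). Taking the minimum makes the objective a pessimistic lower bound, removing any incentive to move the probability ratio $r_t(\theta)$ far outside $[1-\epsilon,\, 1+\epsilon]$.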

Summary

This paper introduces Proximal Policy Optimization (PPO), a family of policy gradient methods for reinforcement learning that alternate between collecting data through environment interaction and optimizing a "surrogate" objective with stochastic gradient ascent. Unlike standard policy gradient methods, which perform one gradient update per data sample, PPO's objective supports multiple epochs of minibatch updates on the same batch of data. The approach retains some of the benefits of Trust Region Policy Optimization (TRPO) while being simpler to implement, more general, and empirically more sample-efficient. Experiments on simulated robotic locomotion and Atari games show that PPO outperforms other online policy gradient methods and strikes a favorable balance between sample efficiency, implementation simplicity, and wall-clock time.
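To make the clipped objective concrete, here is a minimal PyTorch sketch of the loss, assuming log-probabilities and advantage estimates have already been computed; the function name and the random-data smoke test are illustrative, not from the paper:

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective L^CLIP, negated for gradient-descent optimizers.

    new_log_probs: log pi_theta(a_t | s_t) under the current policy
    old_log_probs: log pi_theta_old(a_t | s_t), fixed from the sampling policy
    advantages:    advantage estimates A_hat_t (e.g., from GAE)
    """
    # Probability ratio r_t(theta) = pi_theta / pi_theta_old, computed via log-probs.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping removes the incentive to push the ratio outside [1-eps, 1+eps].
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise min gives a pessimistic lower bound; negate to minimize.
    return -torch.min(unclipped, clipped).mean()

# Tiny smoke test with random data (shapes only; not a training loop).
new_lp = torch.randn(64, requires_grad=True)
old_lp = new_lp.detach() + 0.1 * torch.randn(64)
adv = torch.randn(64)
loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()
print(float(loss))
```

In practice this loss is minimized over several epochs of minibatch updates on each batch of rollout data, which is what distinguishes PPO from vanilla policy gradient's single update per sample.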

Cited by 1 page

Page: Deep Learning Revolution Era · Type: Historical · Quality: 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 5 KB
[1707.06347] Proximal Policy Optimization Algorithms
Computer Science > Machine Learning
arXiv:1707.06347 (cs)

[Submitted on 20 Jul 2017 (v1), last revised 28 Aug 2017 (this version, v2)]
Title: Proximal Policy Optimization Algorithms
Authors: John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov

 
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:1707.06347 [cs.LG] (or arXiv:1707.06347v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.1707.06347 (arXiv-issued DOI via DataCite)

 Submission history

From: John Schulman
[v1] Thu, 20 Jul 2017 02:32:33 UTC (2,178 KB)
[v2] Mon, 28 Aug 2017 09:20:06 UTC (2,537 KB)

... (truncated, 5 KB total)
Resource ID: 40de426bfa4c85b7 | Stable ID: MzZkZjgyZj