What is: Proximal Policy Optimization?
| Source | Proximal Policy Optimization Algorithms |
| Year | 2017 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Proximal Policy Optimization, or PPO, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of TRPO, while using only first-order optimization.
Let $r_t(\theta)$ denote the probability ratio $r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$, so $r_t(\theta_{\text{old}}) = 1$. TRPO maximizes a "surrogate" objective:

$$L^{\text{CPI}}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\,\hat{A}_t\right] = \hat{\mathbb{E}}_t\left[r_t(\theta)\,\hat{A}_t\right]$$
where the superscript CPI refers to conservative policy iteration. Without a constraint, maximization of $L^{\text{CPI}}$ would lead to an excessively large policy update; hence, PPO modifies the objective to penalize changes to the policy that move $r_t(\theta)$ away from 1:

$$L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \text{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right]$$
where $\epsilon$ is a hyperparameter, say, $\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{\text{CPI}}$. The second term, $\text{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t$, modifies the surrogate objective by clipping the probability ratio, which removes the incentive for moving $r_t(\theta)$ outside of the interval $[1-\epsilon, 1+\epsilon]$. Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse.
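To make the clipping concrete, here is a minimal sketch of the clipped surrogate loss in PyTorch. The function name and the assumption that per-timestep log-probabilities and advantage estimates are already available are illustrative choices, not prescribed by the paper; the sign is flipped so the objective can be minimized with a standard optimizer.

```python
import torch

def clipped_surrogate_loss(log_probs_new, log_probs_old, advantages, epsilon=0.2):
    """Negative of the PPO clipped surrogate objective L^CLIP (illustrative sketch)."""
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t),
    # computed from log-probabilities for numerical stability.
    ratio = torch.exp(log_probs_new - log_probs_old.detach())

    # Unclipped term r_t(theta) * A_t and clipped term clip(r_t, 1-eps, 1+eps) * A_t.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages

    # Pessimistic bound: elementwise minimum of the two terms, averaged over timesteps.
    return -torch.min(unclipped, clipped).mean()
```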
One detail to note is that when we apply PPO to a network with shared parameters for the actor and critic functions, we typically add to the objective a value-estimation error term and an entropy term to encourage exploration.
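A possible sketch of this combined loss for a shared actor-critic network is below, reusing `clipped_surrogate_loss` from the previous snippet. The coefficients `c1` and `c2`, and the use of pre-computed returns and entropy, are assumed conventions for illustration rather than values fixed by the definition above.

```python
import torch
import torch.nn.functional as F

def ppo_shared_loss(log_probs_new, log_probs_old, advantages,
                    values_pred, returns, entropy,
                    epsilon=0.2, c1=0.5, c2=0.01):
    """Combined PPO loss for shared actor-critic parameters (illustrative sketch)."""
    # Clipped policy surrogate (negated for minimization), as defined above.
    policy_loss = clipped_surrogate_loss(log_probs_new, log_probs_old, advantages, epsilon)

    # Squared-error term on the value estimate against the empirical returns.
    value_loss = F.mse_loss(values_pred, returns)

    # Entropy bonus: subtracting it rewards higher-entropy (more exploratory) policies.
    return policy_loss + c1 * value_loss - c2 * entropy.mean()
```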