
What is: Decentralized Distributed Proximal Policy Optimization?

Source: DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames
Year: 2019
Data Source: CC BY-SA - https://paperswithcode.com

Decentralized Distributed Proximal Policy Optimization (DD-PPO) is a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever "stale"), making it conceptually simple and easy to implement.

Proximal Policy Optimization, or PPO, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of TRPO, while using only first-order optimization.

Let $r_t(\theta)$ denote the probability ratio $r_t(\theta) = \frac{\pi_{\theta}\left(a_t \mid s_t\right)}{\pi_{\theta_{old}}\left(a_t \mid s_t\right)}$, so $r\left(\theta_{old}\right) = 1$. TRPO maximizes a "surrogate" objective:

$$L^{v}\left(\theta\right) = \hat{\mathbb{E}}_t\left[\frac{\pi_{\theta}\left(a_t \mid s_t\right)}{\pi_{\theta_{old}}\left(a_t \mid s_t\right)}\hat{A}_t\right] = \hat{\mathbb{E}}_t\left[r_t\left(\theta\right)\hat{A}_t\right]$$
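As a rough illustration, here is a minimal PyTorch sketch of this surrogate objective. The function name and the assumption that per-timestep log-probabilities and advantage estimates are already computed are mine, not from the paper.

```python
import torch

def surrogate_objective(new_log_probs, old_log_probs, advantages):
    """Empirical surrogate E_t[r_t(theta) * A_hat_t] (hypothetical helper).

    new_log_probs: log pi_theta(a_t | s_t) under the current policy
    old_log_probs: log pi_theta_old(a_t | s_t) under the policy that
                   collected the data (held fixed, hence detached)
    advantages:    advantage estimates A_hat_t
    """
    # Probability ratio r_t(theta) = pi_theta / pi_theta_old,
    # computed in log space for numerical stability.
    ratio = torch.exp(new_log_probs - old_log_probs.detach())
    # Average over sampled timesteps approximates the expectation E_t[.].
    return (ratio * advantages).mean()
```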

As a general abstraction, DD-PPO implements the following: at step $k$, worker $n$ has a copy of the parameters, $\theta^k_n$, calculates the gradient, $\delta\theta^k_n$, and updates $\theta$ via

$$\theta^{k+1}_n = \text{ParamUpdate}\Big(\theta^{k}_n,\ \text{AllReduce}\big(\delta\theta^k_1, \ldots, \delta\theta^k_N\big)\Big) = \text{ParamUpdate}\Big(\theta^{k}_n,\ \frac{1}{N}\sum_{i=1}^{N}\delta\theta^k_i\Big)$$

where $\text{ParamUpdate}$ is any first-order optimization technique (e.g., gradient descent) and $\text{AllReduce}$ performs a reduction (e.g., mean) over all copies of a variable and returns the result to all workers. This distributed data-parallel scheme scales very well (near-linear scaling up to 32,000 GPUs in supervised learning) and is reasonably simple to implement (all workers synchronously run identical code).
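Below is a minimal sketch of one such synchronous, decentralized update step using PyTorch's `torch.distributed` all-reduce primitive. The function name and surrounding structure are illustrative assumptions, not the authors' implementation; it only shows how the $\text{AllReduce}$ and $\text{ParamUpdate}$ steps above fit together.

```python
import torch
import torch.distributed as dist

def decentralized_sync_step(model, loss, optimizer):
    """One synchronous, decentralized update (illustrative sketch):
    each worker computes its own gradient, averages it with all other
    workers via AllReduce, then applies an identical optimizer step,
    so every copy of the parameters stays in sync."""
    optimizer.zero_grad()
    loss.backward()

    world_size = dist.get_world_size()  # N workers
    for param in model.parameters():
        if param.grad is not None:
            # AllReduce: sum this gradient tensor across all N workers...
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            # ...then divide by N to obtain the mean gradient from the equation above.
            param.grad /= world_size

    # ParamUpdate: any first-order optimizer (SGD, Adam, ...).
    optimizer.step()
```

In practice, PyTorch's `DistributedDataParallel` wrapper performs this same gradient averaging automatically during the backward pass, so the explicit loop here is only for exposition.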