What is: Stein Variational Policy Gradient?
| Source | Stein Variational Policy Gradient |
| Year | 2017 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Stein Variational Policy Gradient, or SVPG, is a policy gradient method in reinforcement learning that uses Stein Variational Gradient Descent to allow simultaneous exploitation and exploration of multiple policies. Unlike traditional policy optimization, which attempts to learn a single policy, SVPG models a distribution over policy parameters, where samples from this distribution represent strong policies. SVPG optimizes this distribution of policy parameters with (relative) entropy regularization: the entropy term explicitly encourages exploration in parameter space while the expected utility of policies drawn from the distribution is maximized. Stein variational gradient descent (SVGD) is then used to optimize this distribution; it leverages efficient deterministic dynamics to transport a set of particles so that they approximate a given target posterior distribution.
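Concretely, the entropy-regularized objective described above can be sketched as follows (a reconstruction in the paper's usual notation, where $J(\theta)$ is the expected return of policy parameters $\theta$, $q$ is the distribution of parameters being learned, $q_0$ is a prior, and $\alpha$ is a temperature controlling the strength of the relative-entropy term):

$$\max_{q}\;\mathbb{E}_{\theta\sim q}\big[J(\theta)\big]\;-\;\alpha\,D_{\mathrm{KL}}\big(q\,\|\,q_0\big),\qquad\text{with optimum}\qquad q^{*}(\theta)\;\propto\;\exp\!\Big(\tfrac{1}{\alpha}J(\theta)\Big)\,q_0(\theta).$$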
With a set of particles $\{\theta_i\}_{i=1}^{n}$ representing samples of the policy parameters and a positive definite kernel $k(\cdot,\cdot)$, the update takes the form:

$$\theta_i \leftarrow \theta_i + \epsilon\,\hat{\phi}^{*}(\theta_i),\qquad \hat{\phi}^{*}(\theta_i)=\frac{1}{n}\sum_{j=1}^{n}\Big[\nabla_{\theta_j}\Big(\tfrac{1}{\alpha}J(\theta_j)+\log q_0(\theta_j)\Big)k(\theta_j,\theta_i)+\nabla_{\theta_j}k(\theta_j,\theta_i)\Big]$$
Note that here the magnitude of $\alpha$ adjusts the relative importance between the policy gradient together with the prior term $\nabla_{\theta_j}\log q_0(\theta_j)$ on one hand, and the repulsive term $\nabla_{\theta_j}k(\theta_j,\theta_i)$ on the other. The repulsive functional is used to diversify the particles and thus enable exploration in parameter space. A suitable $\alpha$ provides a good trade-off between exploitation and exploration. If $\alpha$ is too large, the Stein gradient only drives the particles to be consistent with the prior $q_0$. As $\alpha \to 0$, the algorithm reduces to running $n$ copies of independent policy gradient algorithms, provided the $\{\theta_i\}$ are initialized very differently. A careful annealing scheme for $\alpha$ allows efficient exploration at the beginning of training and focuses on exploitation towards the end.
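As a concrete illustration, below is a minimal NumPy sketch of one SVPG particle update, assuming the per-particle policy gradients $\nabla_{\theta_j}J(\theta_j)$ and prior gradients $\nabla_{\theta_j}\log q_0(\theta_j)$ have already been estimated from rollouts (e.g. with REINFORCE or an advantage actor-critic). The RBF kernel with a median-distance bandwidth is a common SVGD choice rather than something mandated by the paper, and all function and variable names are illustrative.

```python
import numpy as np

def rbf_kernel(thetas):
    """RBF kernel matrix and its gradients w.r.t. the first argument.

    thetas: (n, d) array of policy-parameter particles.
    Returns K (n, n) and grad_K (n, n, d), where
    grad_K[j, i] = d k(theta_j, theta_i) / d theta_j.
    """
    diffs = thetas[:, None, :] - thetas[None, :, :]        # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                 # (n, n)
    # Median heuristic for the bandwidth, a common SVGD choice.
    h = np.median(sq_dists) / np.log(len(thetas) + 1) + 1e-8
    K = np.exp(-sq_dists / h)                              # (n, n)
    grad_K = -2.0 / h * diffs * K[..., None]               # (n, n, d)
    return K, grad_K

def svpg_step(thetas, policy_grads, prior_grads, alpha, lr):
    """One SVPG update applied to all particles.

    policy_grads[j] ~ grad_{theta_j} J(theta_j)        (estimated policy gradient)
    prior_grads[j]  ~ grad_{theta_j} log q_0(theta_j)  (prior term)
    """
    n = len(thetas)
    K, grad_K = rbf_kernel(thetas)
    driving = policy_grads / alpha + prior_grads           # (n, d)
    # phi[i] = (1/n) * sum_j [ k(theta_j, theta_i) * driving[j]
    #                          + grad_{theta_j} k(theta_j, theta_i) ]
    phi = (K.T @ driving + grad_K.sum(axis=0)) / n
    return thetas + lr * phi
```

In practice, each `policy_grads[j]` would be estimated from rollouts of the policy parameterized by `thetas[j]`, and `alpha` would be annealed from a large to a small value over training, as described above.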