What is: Prioritized Experience Replay?
Source | Prioritized Experience Replay |
Year | 2015 |
Data Source | CC BY-SA - https://paperswithcode.com |
Prioritized Experience Replay is a type of experience replay in reinforcement learning where we more frequently replay transitions with high expected learning progress, as measured by the magnitude of their temporal-difference (TD) error. This prioritization can lead to a loss of diversity, which is alleviated with stochastic prioritization, and introduce bias, which can be corrected with importance sampling.
The stochastic sampling method interpolates between pure greedy prioritization and uniform random sampling. The probability of being sampled is monotonic in a transition's priority, while a non-zero probability is guaranteed even for the lowest-priority transition. Concretely, the probability of sampling transition $i$ is defined as

$$P(i) = \frac{p_i^{\alpha}}{\sum_k p_k^{\alpha}}$$

where $p_i > 0$ is the priority of transition $i$. The exponent $\alpha$ determines how much prioritization is used, with $\alpha = 0$ corresponding to the uniform case.
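As a rough illustration, the following NumPy sketch computes these sampling probabilities for a small buffer and draws a prioritized minibatch; the function name `sampling_probabilities` and the example priority values are illustrative, not from the source.

```python
import numpy as np

def sampling_probabilities(priorities, alpha=0.6):
    """P(i) = p_i^alpha / sum_k p_k^alpha; alpha = 0 recovers uniform sampling."""
    scaled = np.asarray(priorities, dtype=np.float64) ** alpha
    return scaled / scaled.sum()

# Example: three stored transitions whose priorities come from their |TD error|.
priorities = [2.0, 0.5, 0.1]
probs = sampling_probabilities(priorities, alpha=0.6)
# Sample a minibatch of indices, biased toward high-priority transitions.
batch_idx = np.random.choice(len(priorities), size=2, p=probs)
```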
Prioritized replay introduces bias because it changes this distribution in an uncontrolled fashion, and therefore changes the solution that the estimates will converge to. We can correct this bias by using importance-sampling (IS) weights

$$w_i = \left(\frac{1}{N}\cdot\frac{1}{P(i)}\right)^{\beta}$$

that fully compensate for the non-uniform probabilities $P(i)$ if $\beta = 1$. These weights can be folded into the Q-learning update by using $w_i \delta_i$ instead of $\delta_i$ (weighted IS rather than ordinary IS). For stability reasons, we always normalize the weights by $1/\max_i w_i$ so that they only scale the update downwards.
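A minimal sketch of this correction, assuming `probs` holds $P(i)$ for every stored transition and `batch_idx` indexes the sampled minibatch (both names are illustrative):

```python
import numpy as np

def is_weights(probs, batch_idx, beta=0.4):
    """w_i = (N * P(i))^(-beta), normalized by 1/max_i w_i so the weights
    only scale updates downward; beta = 1 fully corrects the sampling bias."""
    N = len(probs)
    w = (N * np.asarray(probs)[batch_idx]) ** (-beta)
    return w / w.max()

# Each weight multiplies its sampled TD error before the gradient step,
# i.e. the update uses w_i * delta_i instead of delta_i (weighted IS).
```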
The two variants of prioritization are proportional, where $p_i = |\delta_i| + \epsilon$ (with $\epsilon$ a small positive constant that keeps transitions with zero TD error from never being revisited), and rank-based, where $p_i = \frac{1}{\text{rank}(i)}$ and $\text{rank}(i)$ is the rank of transition $i$ when the replay memory is sorted according to $|\delta_i|$. In both cases, $\beta$ is annealed from its initial value $\beta_0$ toward 1 over the course of training. For the proportional variant, the hyperparameters used were $\alpha = 0.6$, $\beta_0 = 0.4$; for the rank-based variant, $\alpha = 0.7$, $\beta_0 = 0.5$.
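The two priority definitions could be sketched as follows; the function names, the value of the $\epsilon$ constant, and the descending-sort convention are assumptions made for illustration.

```python
import numpy as np

def proportional_priority(td_errors, eps=1e-6):
    """Proportional variant: p_i = |delta_i| + eps, so transitions with
    zero TD error still retain a small chance of being replayed."""
    return np.abs(td_errors) + eps

def rank_based_priority(td_errors):
    """Rank-based variant: p_i = 1 / rank(i), where rank(i) is the position
    of transition i when the memory is sorted by |delta_i| (largest first)."""
    order = np.argsort(-np.abs(td_errors))           # indices in descending |delta| order
    ranks = np.empty(len(td_errors), dtype=np.int64)
    ranks[order] = np.arange(1, len(td_errors) + 1)  # rank 1 = largest |delta|
    return 1.0 / ranks
```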