What is: RAdam?
Source | On the Variance of the Adaptive Learning Rate and Beyond |
Year | 2019 |
Data Source | CC BY-SA - https://paperswithcode.com |
Rectified Adam, or RAdam, is a variant of the Adam stochastic optimizer that introduces a term to rectify the variance of the adaptive learning rate. It seeks to tackle the bad convergence problem suffered by Adam. The authors argue that the root cause of this behaviour is that the adaptive learning rate has undesirably large variance in the early stage of model training, due to the limited amount of training samples being used. Thus, to reduce such variance, it is better to use smaller learning rates in the first few epochs of training, which justifies the warmup heuristic. This heuristic motivates RAdam, which rectifies the variance problem as follows. As in Adam, the gradient $g_t$, the moving averages $m_t = \beta_1 m_{t-1} + (1-\beta_1)g_t$ and $v_t = \beta_2 v_{t-1} + (1-\beta_2)g_t^2$, and the bias-corrected first moment $\hat{m}_t = m_t/(1-\beta_1^t)$ are computed first, together with the length of the approximated SMA, $\rho_t = \rho_{\infty} - \frac{2t\beta_2^t}{1-\beta_2^t}$, where $\rho_{\infty} = \frac{2}{1-\beta_2} - 1$.
If the variance is tractable, i.e. $\rho_t > 4$, then:
...the adaptive learning rate is computed as:
$$l_t = \sqrt{\frac{1-\beta_2^t}{v_t}}$$
...the variance rectification term is calculated as:
$$r_t = \sqrt{\frac{(\rho_t-4)(\rho_t-2)\rho_{\infty}}{(\rho_{\infty}-4)(\rho_{\infty}-2)\rho_t}}$$
...and we update parameters with adaptive momentum:
$$\theta_t = \theta_{t-1} - \alpha_t r_t \hat{m}_t l_t$$
If the variance isn't tractable, we update instead with:
$$\theta_t = \theta_{t-1} - \alpha_t \hat{m}_t$$
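Below is a minimal NumPy sketch of a single RAdam update step, following the formulas above; the function name `radam_step`, the flat parameter array, and the `eps` stability constant are illustrative assumptions rather than part of the original method description.

```python
import numpy as np

def radam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update (sketch). theta: parameters, grad: gradient at step t (1-based),
    m, v: Adam first/second moment estimates carried between steps."""
    # Adam moment updates and bias-corrected first moment.
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)

    # Maximum length of the approximated SMA and its value at step t.
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance is tractable: rectified adaptive update.
        l_t = np.sqrt((1.0 - beta2 ** t) / (v + eps))  # adaptive learning rate (eps added for stability)
        r_t = np.sqrt((rho_t - 4.0) * (rho_t - 2.0) * rho_inf
                      / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))  # variance rectification term
        theta = theta - alpha * r_t * m_hat * l_t
    else:
        # Variance is not tractable: fall back to an un-adapted momentum update.
        theta = theta - alpha * m_hat

    return theta, m, v


# Example usage on a toy quadratic loss f(theta) = ||theta||^2 / 2, whose gradient is theta.
theta = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = theta  # gradient of the toy loss
    theta, m, v = radam_step(theta, grad, m, v, t)
```

In practice `m`, `v`, and the step count are kept as per-parameter optimizer state; the `eps` term only guards against division by zero and does not appear in the formulas above.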