What is: AdaDelta?
Source | ADADELTA: An Adaptive Learning Rate Method |
Year | 2012 |
Data Source | CC BY-SA - https://paperswithcode.com |
AdaDelta is a stochastic optimization technique that provides a per-dimension learning rate for SGD. It is an extension of Adagrad that seeks to reduce Adagrad's aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, AdaDelta restricts the window of accumulated past gradients to a fixed size $w$.
Instead of inefficiently storing $w$ previous squared gradients, the sum of gradients is recursively defined as a decaying average of all past squared gradients. The running average $E[g^2]_t$ at time step $t$ then depends only on the previous average and the current gradient:

$$E[g^2]_t = \gamma E[g^2]_{t-1} + (1 - \gamma)\, g_t^2$$
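To make the recursion concrete, here is a minimal NumPy sketch of the decaying average; the variable names and the stand-in gradient stream are illustrative assumptions, not code from the paper:

```python
import numpy as np

# Decaying average of squared gradients: E[g^2]_t = γ E[g^2]_{t-1} + (1-γ) g_t^2
gamma = 0.9
avg_sq_grad = np.zeros(3)  # E[g^2]_0 = 0, one entry per parameter

for grad in [np.array([0.1, -0.2, 0.3])] * 5:  # stand-in gradient stream
    avg_sq_grad = gamma * avg_sq_grad + (1 - gamma) * grad**2
```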
Usually $\gamma$ is set to around $0.9$. Rewriting the vanilla SGD update in terms of the parameter update vector $\Delta\theta_t$:

$$\Delta\theta_t = -\eta\, g_t, \qquad \theta_{t+1} = \theta_t + \Delta\theta_t$$
Dividing the learning rate by the root of the decaying average gives an Adagrad-like update with a sliding window:

$$\Delta\theta_t = -\frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\, g_t$$

AdaDelta then replaces $\eta$ with an RMS of the previous parameter updates, so it takes the form:

$$\Delta\theta_t = -\frac{\mathrm{RMS}[\Delta\theta]_{t-1}}{\mathrm{RMS}[g]_t}\, g_t$$

where $\mathrm{RMS}[g]_t = \sqrt{E[g^2]_t + \epsilon}$, $\mathrm{RMS}[\Delta\theta]_{t-1} = \sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}$, and $E[\Delta\theta^2]_t$ is accumulated with the same decaying average as the squared gradients.
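Putting the pieces together, a compact NumPy sketch of one AdaDelta step following the equations above might look as follows; the function name, state layout, and toy loss are assumptions for illustration, not the paper's reference implementation:

```python
import numpy as np

def adadelta_step(theta, grad, state, gamma=0.9, eps=1e-6):
    """One AdaDelta update; a sketch of the equations above."""
    avg_sq_grad, avg_sq_update = state

    # Decaying average of squared gradients: E[g^2]_t
    avg_sq_grad = gamma * avg_sq_grad + (1 - gamma) * grad**2

    # Update scaled by RMS of past updates (numerator) over RMS of gradients
    update = -np.sqrt(avg_sq_update + eps) / np.sqrt(avg_sq_grad + eps) * grad

    # Decaying average of squared updates: E[Δθ^2]_t
    avg_sq_update = gamma * avg_sq_update + (1 - gamma) * update**2

    return theta + update, (avg_sq_grad, avg_sq_update)

# Usage on a toy quadratic loss f(θ) = ||θ||^2 / 2, whose gradient is θ
theta = np.array([1.0, -2.0])
state = (np.zeros_like(theta), np.zeros_like(theta))
for _ in range(100):
    theta, state = adadelta_step(theta, theta, state)
```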
The main advantage of AdaDelta is that we do not need to set a default learning rate: the $\mathrm{RMS}[\Delta\theta]_{t-1}$ term in the numerator takes its place.
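In practice one rarely hand-rolls this; for instance, PyTorch ships an implementation, `torch.optim.Adadelta`, whose `rho` and `eps` arguments correspond to $\gamma$ and $\epsilon$ above. It still exposes an `lr` argument, but that acts only as a scale factor on the computed update and defaults to 1.0:

```python
import torch

model = torch.nn.Linear(10, 1)  # illustrative model
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.9, eps=1e-6)

loss = model(torch.randn(4, 10)).pow(2).mean()  # stand-in loss
loss.backward()
optimizer.step()
```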