What is: TD Lambda?
| Year | 2000 |
| Data Source | CC BY-SA - https://paperswithcode.com |
TD($\lambda$) is a generalisation of TD($n$) reinforcement learning algorithms, but it employs an eligibility trace and $\lambda$-weighted returns. The eligibility trace vector is initialized to zero at the beginning of the episode, is incremented on each time step by the value gradient, and then fades away by $\gamma\lambda$:

\mathbf{z}_{-1} = \mathbf{0}

\mathbf{z}_{t} = \gamma\lambda\mathbf{z}_{t-1} + \nabla\hat{v}\left(S_{t}, \mathbf{w}_{t}\right), \quad 0 \leq t \leq T
The eligibility trace keeps track of which components of the weight vector have contributed to recent state valuations. In the linear case, the value gradient $\nabla\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ is simply the feature vector $\mathbf{x}\left(S_{t}\right)$, so the trace accumulates recently visited features.
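To make the trace update concrete, here is a minimal NumPy sketch for the linear case described above; the names `z`, `x`, `gamma`, and `lam` are illustrative, not from the source:

```python
import numpy as np

def update_trace(z: np.ndarray, x: np.ndarray, gamma: float, lam: float) -> np.ndarray:
    """Accumulating-trace update z_t = gamma * lambda * z_{t-1} + x_t,
    where x_t is the feature vector (the value gradient in the linear case)."""
    return gamma * lam * z + x
```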
The TD error for state-value prediction is:
\delta_{t} = R_{t+1} + \gamma\hat{v}\left(S_{t+1}, \mathbf{w}_{t}\right) - \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)
In TD($\lambda$), the weight vector is updated on each step in proportion to the scalar TD error and the vector eligibility trace:

\mathbf{w}_{t+1} = \mathbf{w}_{t} + \alpha\delta_{t}\mathbf{z}_{t}
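Putting the three pieces together, the following is a minimal sketch of semi-gradient TD($\lambda$) for linear state-value prediction. The environment interface (`reset`, `step`), the `policy` function, and the feature map `phi` are hypothetical stand-ins chosen for illustration:

```python
import numpy as np

def td_lambda(env, policy, phi, num_features,
              episodes=100, alpha=0.1, gamma=0.99, lam=0.8):
    """Semi-gradient TD(lambda) for linear state-value prediction.

    Assumes a hypothetical env with reset() -> state and
    step(action) -> (next_state, reward, done), plus a feature map
    phi(state) -> np.ndarray of length num_features.
    """
    w = np.zeros(num_features)                  # weight vector w
    for _ in range(episodes):
        state = env.reset()
        z = np.zeros(num_features)              # eligibility trace, z_{-1} = 0
        done = False
        while not done:
            next_state, reward, done = env.step(policy(state))
            x = phi(state)
            # Trace fades by gamma * lambda and accumulates the value
            # gradient, which for linear v-hat is the feature vector x.
            z = gamma * lam * z + x
            # TD error: delta = R + gamma * v(S') - v(S); terminal value is 0.
            v_next = 0.0 if done else w @ phi(next_state)
            delta = reward + gamma * v_next - w @ x
            # Weight update proportional to the TD error and the trace.
            w += alpha * delta * z
            state = next_state
    return w
```

With $\lambda = 0$ the trace reduces to the current feature vector and the loop recovers one-step TD(0); with $\lambda = 1$ and episodic tasks it approaches a Monte Carlo update.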
Source: Sutton and Barto, Reinforcement Learning: An Introduction, 2nd Edition