What is: Dilated Sliding Window Attention?
Source | Longformer: The Long-Document Transformer |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
Dilated Sliding Window Attention is an attention pattern for attention-based models. It was proposed as part of the Longformer architecture. It is motivated by the fact that non-sparse attention in the original Transformer formulation has a self-attention component with $O\left(n^{2}\right)$ time and memory complexity, where $n$ is the input sequence length, and thus is not efficient to scale to long inputs.
Compared to a Sliding Window Attention pattern, we can further increase the receptive field without increasing computation by making the sliding window "dilated". This is analogous to dilated CNNs, where the window has gaps of size dilation $d$. Assuming a fixed window size $w$ and dilation $d$ for all layers, the receptive field is $l \times d \times w$, where $l$ is the number of layers; this can reach tens of thousands of tokens even for small values of $d$.
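To make the pattern concrete, below is a minimal sketch of how a dilated sliding window mask could be built, assuming PyTorch and a dense boolean mask for readability. The function name and parameters are illustrative, not the Longformer implementation, which relies on custom banded-matrix kernels for efficiency.

```python
import torch

def dilated_sliding_window_mask(seq_len: int, window: int, dilation: int) -> torch.Tensor:
    """Illustrative dense boolean mask for dilated sliding window attention.

    Each query position i attends to keys at offsets k * dilation for
    |k| <= window // 2, i.e. a local window with gaps of size `dilation`,
    rather than a contiguous neighborhood.
    """
    idx = torch.arange(seq_len)
    offsets = idx[None, :] - idx[:, None]              # key offset relative to each query
    half = window // 2
    within_reach = offsets.abs() <= half * dilation    # inside the dilated span
    on_dilation_grid = offsets % dilation == 0         # only every `dilation`-th position
    return within_reach & on_dilation_grid

# Example: window of 4 with dilation 2 covers offsets {-4, -2, 0, 2, 4},
# so stacking l such layers gives a receptive field on the order of l * d * w.
mask = dilated_sliding_window_mask(seq_len=16, window=4, dilation=2)
print(mask.int())
```

In a real model this mask (or an equivalent sparse kernel) would be applied to the attention scores before the softmax, so each token only incurs a constant per-token cost instead of the full $O\left(n^{2}\right)$ computation.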