What is: Fixed Factorized Attention?
| Source | Generating Long Sequences with Sparse Transformers |
| Year | 2019 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Fixed Factorized Attention is a factorized attention pattern where specific cells summarize previous locations and propagate that information to all future cells. It was proposed as part of the Sparse Transformer architecture.
A self-attention layer maps a matrix of input embeddings $X$ to an output matrix and is parameterized by a connectivity pattern $S = \{S_1, \dots, S_n\}$, where $S_i$ denotes the set of indices of the input vectors to which the $i$-th output vector attends. The output vector is a weighted sum of transformations of the input vectors:

$$\text{Attend}(X, S) = \big(a(\mathbf{x}_i, S_i)\big)_{i \in \{1, \dots, n\}}$$

$$a(\mathbf{x}_i, S_i) = \text{softmax}\left(\frac{(W_q \mathbf{x}_i) K_{S_i}^{\top}}{\sqrt{d}}\right) V_{S_i}$$

$$K_{S_i} = (W_k \mathbf{x}_j)_{j \in S_i}, \qquad V_{S_i} = (W_v \mathbf{x}_j)_{j \in S_i}$$

Here $W_q$, $W_k$, and $W_v$ represent the weight matrices which transform a given $\mathbf{x}_i$ into a query, key, or value, and $d$ is the inner dimension of the queries and keys. The output at each position is a sum of the values weighted by the scaled dot-product similarity of the keys and queries.
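To make the definition above concrete, here is a minimal NumPy sketch of the attention computation for an arbitrary connectivity pattern $S$. The function name `sparse_attend` and the representation of $S$ as a list of index arrays are illustrative choices, not from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_attend(X, S, W_q, W_k, W_v):
    """Compute Attend(X, S) for a connectivity pattern S.

    X : (n, d_in) input embeddings
    S : list of index arrays; S[i] holds the indices the i-th output attends to
    W_q, W_k, W_v : (d_in, d) projection matrices for queries, keys, values
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = Q.shape[-1]
    out = np.zeros_like(V)
    for i, S_i in enumerate(S):
        K_Si, V_Si = K[S_i], V[S_i]                    # gather keys/values indexed by S_i
        weights = softmax(Q[i] @ K_Si.T / np.sqrt(d))  # scaled dot-product similarities
        out[i] = weights @ V_Si                        # weighted sum of the values
    return out
```

In practice the pattern is applied via masking inside batched matrix multiplies rather than an explicit per-position loop; the loop here only mirrors the per-position definition of $a(\mathbf{x}_i, S_i)$.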
Full self-attention for autoregressive models defines $S_i = \{j : j \leq i\}$, allowing every element to attend to all previous positions and its own position.
Factorized self-attention instead has $p$ separate attention heads, where the $m$-th head defines a subset of the indices $A_i^{(m)} \subset \{j : j \leq i\}$ and lets $S_i = A_i^{(m)}$. The goal with the Sparse Transformer was to find efficient choices for the subset $A_i^{(m)}$.
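As a reference point, the full causal pattern can be written down directly; a factorized head then replaces each $S_i$ with a smaller subset of the same causal set. A minimal sketch, with illustrative function names:

```python
import numpy as np

def full_causal_connectivity(n):
    """Full self-attention for autoregressive models: S_i = {j : j <= i}."""
    return [np.arange(i + 1) for i in range(n)]

def is_causal(A):
    """A factorized head must only keep indices from the causal set,
    i.e. every A_i^(m) is a subset of {j : j <= i}."""
    return all(np.all(A_i <= i) for i, A_i in enumerate(A))
```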
Formally for Fixed Factorized Attention, $A_i^{(1)} = \{j : \lfloor j/l \rfloor = \lfloor i/l \rfloor\}$, where the brackets denote the floor operation, and $A_i^{(2)} = \{j : j \bmod l \in \{t, t+1, \dots, l\}\}$, where $t = l - c$ and $c$ is a hyperparameter. The $i$-th output vector of the attention head attends to all input vectors either from $A_i^{(1)}$ or $A_i^{(2)}$. This pattern is visualized in the original paper.
If the stride is 128 and $c = 8$, then all future positions greater than 128 can attend to positions 120-128, all positions greater than 256 can attend to 248-256, and so forth.
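The two index sets can be built in a few lines of NumPy. This is a sketch of the pattern as defined above; the helper names `fixed_head_1` and `fixed_head_2` are illustrative and not from the paper:

```python
import numpy as np

def fixed_head_1(i, l):
    """A_i^(1): positions in the same length-l block as i, up to and including i."""
    block_start = (i // l) * l
    return np.arange(block_start, i + 1)

def fixed_head_2(i, l, c):
    """A_i^(2): the last c positions of every block, i.e. j <= i with j mod l >= l - c.
    These are the 'summary' cells that all future positions can attend to."""
    j = np.arange(i + 1)
    return j[j % l >= l - c]

# Example matching the text: stride l = 128, c = 8.
# Position 200 lies in the second block, so head 1 covers 128..200 and
# head 2 covers the summary cells of the first block (indices 120..127, 0-indexed).
print(fixed_head_1(200, 128))     # [128 129 ... 200]
print(fixed_head_2(200, 128, 8))  # [120 121 122 123 124 125 126 127]
```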
A fixed-attention pattern with $c = 1$ limits the expressivity of the network significantly, as many representations in the network are only used for one block, whereas a small number of locations are used by all blocks. The authors found that choosing $c \in \{8, 16, 32\}$ for typical values of $l \in \{128, 256\}$ performs well, although this increases the computational cost of the method by $c$ in comparison to strided attention.
Additionally, the authors found that when using multiple heads, having them attend to distinct subblocks of length $c$ within the block of size $l$ was preferable to having them attend to the same subblock.
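One way to realize distinct subblocks per head is sketched below. The particular layout (head $m$ takes the $m$-th length-$c$ subblock counted back from the end of the block) is an assumption for illustration, not necessarily the authors' exact assignment:

```python
import numpy as np

def fixed_head_2_per_head(i, l, c, head):
    """Variant of A_i^(2) where each head gets its own length-c subblock
    inside the length-l block: head 0 takes [l-c, l), head 1 the subblock
    before it, and so on. Assumed layout, for illustration only."""
    hi = l - head * c      # exclusive upper bound of this head's subblock (mod l)
    lo = hi - c            # inclusive lower bound
    assert lo >= 0, "need (head + 1) * c <= l"
    j = np.arange(i + 1)
    r = j % l
    return j[(r >= lo) & (r < hi)]

# With l = 128, c = 8: head 0 attends to j mod 128 in 120..127,
# head 1 to 112..119, head 2 to 104..111, and so on.
print(fixed_head_2_per_head(200, 128, 8, 1))  # [112 113 114 115 116 117 118 119]
```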