What is: Random Synthesized Attention?
| Source | Synthesizer: Rethinking Self-Attention in Transformer Models |
| Year | 2020 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Random Synthesized Attention is a form of synthesized attention where the attention weights are not conditioned on any input tokens. Instead, the attention weights are initialized to random values. It was introduced with the Synthesizer architecture. Random Synthesized Attention contrasts with Dense Synthesized Attention, which conditions on each token independently, as opposed to the pairwise token interactions of the vanilla Transformer model.
Let $R$ be a randomly initialized matrix. Random Synthesized Attention is defined as:

$$ Y = \text{Softmax}\left(R\right)G\left(X\right) $$

where $R \in \mathbb{R}^{l \times l}$ and $G\left(X\right)$ is a parameterized function of the input $X$, analogous to the value projection in standard attention. Notably, each head adds $l^{2}$ parameters to the overall network. The basic idea of the Random Synthesizer is not to rely on pairwise token interactions or any information from individual tokens, but rather to learn a task-specific alignment that works well globally across many samples. This is a direct generalization of the recently proposed fixed self-attention patterns of Raganato et al. (2020).
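To make the definition concrete, here is a minimal single-head sketch in PyTorch. It assumes a simple linear layer standing in for $G\left(X\right)$ and a fixed maximum sequence length; the class and parameter names are illustrative, not from the original Synthesizer codebase.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomSynthesizerAttention(nn.Module):
    """Sketch of single-head Random Synthesized Attention.

    The l x l attention matrix R is a freely learned parameter that is
    independent of the input tokens; only G(X) depends on the input.
    """
    def __init__(self, seq_len: int, d_model: int, trainable: bool = True):
        super().__init__()
        # R in R^{l x l}: randomly initialized; adds l^2 parameters per head.
        # Setting trainable=False keeps R fixed after random initialization.
        self.R = nn.Parameter(torch.randn(seq_len, seq_len),
                              requires_grad=trainable)
        # G(X): a linear value projection, standing in for the paper's
        # parameterized function of the input (assumed form here).
        self.g = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        attn = F.softmax(self.R, dim=-1)  # (l, l), same for every input
        return attn @ self.g(x)           # Y = Softmax(R) G(X)

# Usage: the attention pattern is identical for every batch element,
# since it never looks at the tokens themselves.
layer = RandomSynthesizerAttention(seq_len=8, d_model=16)
x = torch.randn(2, 8, 16)
print(layer(x).shape)  # torch.Size([2, 8, 16])
```

Because $R$ does not depend on $X$, no query-key dot products are computed at all; the alignment is learned once per task rather than synthesized per example.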