What is: Locally-Grouped Self-Attention?
Source | Twins: Revisiting the Design of Spatial Attention in Vision Transformers |
Year | 2021 |
Data Source | CC BY-SA - https://paperswithcode.com |
Locally-Grouped Self-Attention, or LSA, is a local attention mechanism used in the Twins-SVT architecture. Motivated by the group design in depthwise convolutions for efficient inference, we first equally divide the 2D feature maps into sub-windows, so that self-attention communication only happens within each sub-window. This design also resonates with the multi-head design in self-attention, where communication only occurs within the channels of the same head. To be specific, for a feature map of size $H \times W$ with $d$ channels, the feature maps are divided into $m \times n$ sub-windows. Without loss of generality, we assume $H \bmod m = 0$ and $W \bmod n = 0$. Each group contains $\frac{HW}{mn}$ elements, and thus the computation cost of the self-attention within one window is $\mathcal{O}\!\left(\frac{H^2 W^2}{m^2 n^2} d\right)$, and the total cost is $\mathcal{O}\!\left(\frac{H^2 W^2}{mn} d\right)$. If we let $k_1 = \frac{H}{m}$ and $k_2 = \frac{W}{n}$, the cost can be written as $\mathcal{O}(k_1 k_2 H W d)$, which is significantly more efficient when $k_1 \ll H$ and $k_2 \ll W$, and grows linearly with $HW$ if $k_1$ and $k_2$ are fixed.
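The following is a minimal PyTorch sketch of the idea, assuming a square window with $k_1 = k_2 = \texttt{window\_size}$ and input sizes divisible by the window size. The class name `LocallyGroupedSelfAttention` and its parameters are illustrative, not the Twins reference implementation: the feature map is partitioned into sub-windows, standard multi-head self-attention runs independently inside each window, and the partition is then reversed.

```python
# Minimal sketch of locally-grouped self-attention (illustrative, not the Twins code).
import torch
import torch.nn as nn


class LocallyGroupedSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, window_size=7):
        super().__init__()
        self.window_size = window_size  # k1 = k2 = window_size
        # Standard multi-head self-attention, applied independently per window.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C) with H % window_size == 0 and W % window_size == 0
        B, H, W, C = x.shape
        k = self.window_size
        m, n = H // k, W // k  # number of sub-windows along each axis

        # Partition into (B * m * n) windows of k*k tokens each.
        x = x.reshape(B, m, k, n, k, C).permute(0, 1, 3, 2, 4, 5)
        windows = x.reshape(B * m * n, k * k, C)

        # Attention is restricted to tokens inside the same window,
        # giving O(k1*k2*H*W*d) cost instead of O(H^2 W^2 d).
        out, _ = self.attn(windows, windows, windows)

        # Reverse the partition back to (B, H, W, C).
        out = out.reshape(B, m, n, k, k, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)


# Usage: a 56x56 feature map with 64 channels and 7x7 windows.
lsa = LocallyGroupedSelfAttention(dim=64, num_heads=4, window_size=7)
y = lsa(torch.randn(2, 56, 56, 64))
print(y.shape)  # torch.Size([2, 56, 56, 64])
```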
Although the locally-grouped self-attention mechanism is computation-friendly, the image is divided into non-overlapping sub-windows. Thus, we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, information would only be processed locally, which keeps the receptive field small and significantly degrades the performance, as shown in our experiments. This is analogous to the fact that we cannot replace all standard convolutions in CNNs with depth-wise convolutions.
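A small check of this receptive-field limitation, reusing the hypothetical `LocallyGroupedSelfAttention` sketch above: perturbing one sub-window leaves every other sub-window's output unchanged no matter how many LSA layers are stacked, because without a cross-window mechanism no information crosses window boundaries.

```python
# Stacked LSA layers: a change inside one window never reaches other windows.
import torch

torch.manual_seed(0)
layers = torch.nn.Sequential(
    *[LocallyGroupedSelfAttention(dim=64, num_heads=4, window_size=7) for _ in range(4)]
)

x = torch.randn(1, 56, 56, 64)
x_perturbed = x.clone()
x_perturbed[:, :7, :7, :] += 1.0  # modify only the top-left 7x7 window

with torch.no_grad():
    y, y_perturbed = layers(x), layers(x_perturbed)

# Only the top-left window changes; the rest of the feature map is untouched.
print(torch.allclose(y[:, 7:, :, :], y_perturbed[:, 7:, :, :]))    # True
print(torch.allclose(y[:, :7, 7:, :], y_perturbed[:, :7, 7:, :]))  # True
print(torch.allclose(y[:, :7, :7, :], y_perturbed[:, :7, :7, :]))  # False
```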