What is: Mixed Attention Block?
Source | ConvBERT: Improving BERT with Span-based Dynamic Convolution |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
Mixed Attention Block is an attention module used in the ConvBERT architecture. It mixes self-attention with span-based dynamic convolution. The two branches share the same query but use different keys to generate the attention map and the convolution kernels, respectively. The number of attention heads is reduced by directly projecting the input into a smaller embedding space, which forms a bottleneck structure for both the self-attention and the span-based dynamic convolution. In the paper's figure, the input and output dimensions of each block are labeled in terms of the input embedding size and the reduction ratio to illustrate the overall framework.
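To make the structure concrete, below is a minimal PyTorch-style sketch of such a block under stated assumptions; it is not ConvBERT's implementation. Names like `MixedAttentionBlock`, `kernel_gen`, and the final output projection are hypothetical, the head-count reduction by the same ratio is an assumption, and the span-based dynamic convolution is simplified to a depthwise convolution over the keys followed by per-position kernel generation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedAttentionBlock(nn.Module):
    """Sketch: self-attention and a span-based dynamic convolution share the
    same query but use separate keys; a bottleneck projection (reduction
    ratio gamma) shrinks the embedding size fed to both branches."""

    def __init__(self, d_model=768, num_heads=12, kernel_size=9, gamma=2):
        super().__init__()
        d = d_model // gamma                  # bottleneck embedding size
        self.h = max(1, num_heads // gamma)   # reduced head count (assumption)
        self.d_head = d // self.h
        self.k = kernel_size                  # convolution span (odd)

        self.q_proj = nn.Linear(d_model, d)   # shared query
        self.k_attn = nn.Linear(d_model, d)   # key for the self-attention map
        self.k_conv = nn.Linear(d_model, d)   # key for the dynamic conv kernels
        self.v_proj = nn.Linear(d_model, d)   # shared value
        # depthwise conv to make the key span-aware (simplification)
        self.span = nn.Conv1d(d, d, kernel_size, padding=kernel_size // 2, groups=d)
        # generates one kernel of size k per head and position
        self.kernel_gen = nn.Linear(d, self.h * kernel_size)
        # hypothetical projection back to the model dimension
        self.out_proj = nn.Linear(2 * d, d_model)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        B, L, _ = x.shape
        q = self.q_proj(x)
        v = self.v_proj(x)
        qh = q.view(B, L, self.h, self.d_head).transpose(1, 2)
        vh = v.view(B, L, self.h, self.d_head).transpose(1, 2)

        # --- self-attention branch ---
        kh = self.k_attn(x).view(B, L, self.h, self.d_head).transpose(1, 2)
        attn = torch.softmax(qh @ kh.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        sa = (attn @ vh).transpose(1, 2).reshape(B, L, -1)

        # --- span-based dynamic convolution branch ---
        ks = self.span(self.k_conv(x).transpose(1, 2)).transpose(1, 2)   # span-aware key
        kernels = torch.softmax(
            self.kernel_gen(q * ks).view(B, L, self.h, self.k), dim=-1)  # per-position kernels
        v_pad = F.pad(vh, (0, 0, self.k // 2, self.k // 2))              # pad sequence dim
        v_win = v_pad.unfold(2, self.k, 1)                               # (B, h, L, d_head, k)
        conv = torch.einsum('bhldk,blhk->blhd', v_win, kernels).reshape(B, L, -1)

        # concatenate both branches and project back to the model dimension
        return self.out_proj(torch.cat([sa, conv], dim=-1))


# Usage example with assumed sizes
x = torch.randn(2, 16, 768)
y = MixedAttentionBlock()(x)
print(y.shape)  # torch.Size([2, 16, 768])
```

The key point the sketch illustrates is the shared query: both the attention map and the convolution kernels are conditioned on the same `q_proj` output, while `k_attn` and `k_conv` give each branch its own key, and the `gamma` bottleneck reduces the dimension (and here, by assumption, the head count) for both branches.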