What is: Channel Attention Module?
Source | CBAM: Convolutional Block Attention Module |
Year | 2018 |
Data Source | CC BY-SA - https://paperswithcode.com |
A Channel Attention Module is a module for channel-based attention in convolutional neural networks. We produce a channel attention map by exploiting the inter-channel relationship of features. As each channel of a feature map is considered a feature detector, channel attention focuses on ‘what’ is meaningful given an input image. To compute the channel attention efficiently, we squeeze the spatial dimension of the input feature map.
We first aggregate spatial information of a feature map by using both average-pooling and max-pooling operations, generating two different spatial context descriptors: $\mathbf{F}^{c}_{avg}$ and $\mathbf{F}^{c}_{max}$, which denote average-pooled features and max-pooled features respectively.
Both descriptors are then forwarded to a shared network to produce our channel attention map $\mathbf{M_{c}} \in \mathbb{R}^{C\times{1}\times{1}}$. Here $C$ is the number of channels. The shared network is composed of a multi-layer perceptron (MLP) with one hidden layer. To reduce parameter overhead, the hidden activation size is set to $\mathbb{R}^{C/r\times{1}\times{1}}$, where $r$ is the reduction ratio. After the shared network is applied to each descriptor, we merge the output feature vectors using element-wise summation. In short, the channel attention is computed as:

$$\mathbf{M_{c}}(\mathbf{F}) = \sigma\big(\text{MLP}(\text{AvgPool}(\mathbf{F})) + \text{MLP}(\text{MaxPool}(\mathbf{F}))\big) = \sigma\big(\mathbf{W_{1}}(\mathbf{W_{0}}(\mathbf{F}^{c}_{avg})) + \mathbf{W_{1}}(\mathbf{W_{0}}(\mathbf{F}^{c}_{max}))\big)$$
where $\sigma$ denotes the sigmoid function, $\mathbf{W_{0}} \in \mathbb{R}^{C/r\times{C}}$, and $\mathbf{W_{1}} \in \mathbb{R}^{C\times{C/r}}$. Note that the MLP weights, $\mathbf{W_{0}}$ and $\mathbf{W_{1}}$, are shared for both inputs and the ReLU activation function is followed by $\mathbf{W_{0}}$.
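A minimal PyTorch sketch of this computation follows. The 1×1 convolutions play the role of the shared MLP on the pooled $C\times{1}\times{1}$ descriptors; the class name `ChannelAttention` and the `use_max_pool` flag are illustrative choices, not names from the paper:

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Sketch of CBAM channel attention: M_c(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))."""

    def __init__(self, channels: int, reduction: int = 16, use_max_pool: bool = True):
        super().__init__()
        self.use_max_pool = use_max_pool
        # Shared one-hidden-layer MLP with hidden size C/r; 1x1 convolutions
        # act as per-channel fully connected layers on the pooled descriptors.
        # The ReLU follows W_0, and the same weights serve both branches.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),  # W_0
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),  # W_1
        )
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze the spatial dimension: (N, C, H, W) -> (N, C, 1, 1)
        attn = self.mlp(self.avg_pool(x))
        if self.use_max_pool:
            # Merge the two branch outputs by element-wise summation
            attn = attn + self.mlp(self.max_pool(x))
        return torch.sigmoid(attn)  # channel attention map M_c, shape (N, C, 1, 1)
```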
Note that the channel attention module with just average pooling is the same as the Squeeze-and-Excitation Module.
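Under the same assumptions as the sketch above, disabling the max-pooling branch illustrates that equivalence:

```python
x = torch.randn(2, 64, 32, 32)

ca = ChannelAttention(channels=64, reduction=16)
refined = x * ca(x)  # broadcast the (2, 64, 1, 1) map over H and W

# With only average pooling, the module reduces to an SE-style block.
se = ChannelAttention(channels=64, reduction=16, use_max_pool=False)
se_refined = x * se(x)

print(refined.shape, se_refined.shape)  # torch.Size([2, 64, 32, 32]) twice
```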