What is: Gated Channel Transformation?
| Source | Gated Channel Transformation for Visual Recognition |
| --- | --- |
| Year | 2020 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Unlike previous methods, GCT first collects global information by computing the $\ell_2$-norm of each channel. Next, a learnable vector $\alpha$ is applied to scale the feature. Then a competition mechanism is adopted by channel normalization to interact between channels. Like other common normalization methods, a learnable scale parameter $\gamma$ and bias $\beta$ are applied to rescale the normalization. However, unlike previous methods, GCT adopts a tanh activation to control the attention vector. Finally, it not only multiplies the input by the attention vector but also adds an identity connection. GCT can be written as: \begin{align} s = F_\text{gct}(X, \theta) & = \tanh \left( \gamma \, \text{CN}\left( \alpha \, \text{Norm}(X) \right) + \beta \right) \end{align} \begin{align} Y & = s X + X \end{align}
where $\alpha$, $\gamma$, and $\beta$ are trainable parameters, $\text{Norm}(\cdot)$ indicates the $\ell_2$-norm of each channel, and $\text{CN}$ is channel normalization.
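As a concrete illustration, below is a minimal PyTorch sketch of a GCT block that follows the equations above. The module name, the `epsilon` stabilizer, and the zero initialization of $\gamma$ and $\beta$ are assumptions for this sketch, not details taken from the source text:

```python
import torch
import torch.nn as nn


class GCT(nn.Module):
    """Minimal sketch of a Gated Channel Transformation block."""

    def __init__(self, num_channels: int, epsilon: float = 1e-5):
        super().__init__()
        # Trainable per-channel parameters alpha, gamma, beta.
        # gamma and beta start at zero (an assumed choice) so the
        # block begins as an identity: tanh(0) = 0, hence Y = X.
        self.alpha = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.gamma = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.epsilon = epsilon  # assumed value, for numerical stability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        # Norm(X): l2-norm of each channel over spatial positions,
        # scaled by the learnable vector alpha.
        embedding = self.alpha * torch.sqrt(
            x.pow(2).sum(dim=(2, 3), keepdim=True) + self.epsilon
        )
        # CN: channel normalization, dividing each channel's embedding
        # by the root-mean-square across channels (the competition step).
        norm = embedding / torch.sqrt(
            embedding.pow(2).mean(dim=1, keepdim=True) + self.epsilon
        )
        # Gating with tanh, then multiply and add the identity: Y = sX + X.
        s = torch.tanh(self.gamma * norm + self.beta)
        return x * s + x
```

With zero-initialized $\gamma$ and $\beta$, the gate is initially closed and the block behaves as an identity mapping, so inserting it into a pretrained network does not perturb the initial behavior; applying `GCT(64)` to a tensor of shape `(2, 64, 32, 32)` returns a tensor of the same shape.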
A GCT block has fewer parameters than an SE block and, being lightweight, can be added after each convolutional layer of a CNN.
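For instance, a GCT block could be interleaved after every convolution of a network. The helper below is a hypothetical sketch that reuses the `GCT` module defined above; the ordering relative to the activation is a design choice assumed here, not prescribed by the description:

```python
import torch.nn as nn


def conv_gct(in_channels: int, out_channels: int) -> nn.Sequential:
    # Hypothetical helper: convolution followed by a GCT block,
    # then a nonlinearity, to be stacked when building a CNN.
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        GCT(out_channels),
        nn.ReLU(inplace=True),
    )
```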