What is: Gated Channel Transformation?

Source: Gated Channel Transformation for Visual Recognition
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

Unlike previous methods, GCT first collects global information by computing the $\ell_2$-norm of each channel. Next, a learnable vector $\alpha$ scales the feature. A competition mechanism is then introduced through channel normalization, which allows the channels to interact. As in other common normalization methods, a learnable scale parameter $\gamma$ and bias $\beta$ rescale the normalized output. Unlike previous methods, however, GCT adopts a tanh activation to control the attention vector. Finally, the input is not only multiplied by the attention vector but also carried through an identity connection. GCT can be written as:

\begin{align} s = F_\text{gct}(X, \theta) = \tanh\left(\gamma \, \text{CN}(\alpha \, \text{Norm}(X)) + \beta\right) \end{align}

\begin{align} Y = sX + X \end{align}

where $\alpha$, $\beta$ and $\gamma$ are trainable parameters, $\text{Norm}(\cdot)$ denotes the $\ell_2$-norm of each channel, and $\text{CN}(\cdot)$ is channel normalization.
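As a concrete illustration, here is a minimal PyTorch sketch of the two equations above. The class name `GCT` and the `eps` stabilizer are implementation assumptions, not part of the original text; initializing $\gamma$ and $\beta$ to zero makes the gate start as an identity mapping.

```python
import torch
import torch.nn as nn


class GCT(nn.Module):
    """Gated Channel Transformation: Y = tanh(gamma * CN(alpha * Norm(X)) + beta) * X + X."""

    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        # Trainable per-channel parameters alpha, gamma, beta.
        self.alpha = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.gamma = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.eps = eps  # numerical stabilizer (an assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Norm(X): l2-norm of each channel over the spatial dimensions, scaled by alpha.
        embedding = (x.pow(2).sum(dim=(2, 3), keepdim=True) + self.eps).sqrt() * self.alpha
        # CN: channel normalization, i.e. l2-normalization across channels
        # (up to a sqrt(C) factor), which creates competition between channels.
        norm = (embedding.pow(2).mean(dim=1, keepdim=True) + self.eps).sqrt()
        # tanh gating with learnable scale gamma and bias beta.
        gate = torch.tanh(self.gamma * (embedding / norm) + self.beta)
        # Y = s*X + X: gated output plus the identity connection.
        return x * gate + x
```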

A GCT block has fewer parameters than an SE block and, being lightweight, can be added after every convolutional layer of a CNN.
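A back-of-the-envelope comparison illustrates the parameter gap, assuming a typical SE reduction ratio of 16 (the ratio and channel count below are illustrative assumptions):

```python
# GCT needs only alpha, gamma, beta: one scalar each per channel.
# SE needs two fully connected layers of size C*C/r between them.
C, r = 256, 16                  # channel count; SE reduction ratio (an assumption)
se_params = (C * C // r) * 2    # two FC layers of an SE block (biases ignored)
gct_params = 3 * C              # alpha, gamma, beta
print(f"SE: {se_params} params, GCT: {gct_params} params")  # SE: 8192, GCT: 768
```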