
What is: Weight Demodulation?

Source: Analyzing and Improving the Image Quality of StyleGAN
Year: 2019
Data Source: CC BY-SA - https://paperswithcode.com

Weight Demodulation is an alternative to adaptive instance normalization for use in generative adversarial networks; it was introduced in StyleGAN2. The style first modulates the convolution by scaling each input feature map by $s_i$, which can equivalently be baked into the weights as $w'_{ijk} = s_i \cdot w_{ijk}$, where $w$ and $w'$ are the original and modulated weights, $s_i$ is the scale corresponding to the $i$-th input feature map, and $j$ and $k$ enumerate the output feature maps and the spatial footprint of the convolution, respectively. The purpose of instance normalization is to remove the effect of $s$ (the scales of the feature maps) from the statistics of the convolution's output feature maps. Weight demodulation tries to achieve this goal more directly. Assume that the input activations are i.i.d. random variables with unit standard deviation. After modulation and convolution, the output activations then have a standard deviation of:

$$\sigma_{j} = \sqrt{\sum_{i,k} {w'_{ijk}}^{2}}$$
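
As a quick numerical check of this relationship, the following snippet (hypothetical shapes and random weights, chosen only for illustration) draws i.i.d. unit-variance activations and compares the empirical standard deviation of the convolution output for one feature map against the $L_2$ norm of its weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 input feature maps and a 3x3 kernel, so k runs over 9 taps.
in_channels, taps = 8, 9
w = rng.normal(size=(in_channels, taps))   # modulated weights w'_{ijk} for one output map j

# i.i.d. input activations with unit standard deviation, one value per weight tap.
x = rng.normal(size=(500_000, in_channels, taps))

# One output activation per sample: a weighted sum over i and k.
out = np.einsum("nik,ik->n", x, w)

print(out.std())                # empirical standard deviation of the outputs
print(np.sqrt((w ** 2).sum()))  # sigma_j, the L2 norm of the weights; the two agree closely
```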

In other words, the outputs are scaled by the $L_2$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. This can be achieved if we scale ("demodulate") each output feature map $j$ by $1/\sigma_{j}$. Alternatively, we can again bake this into the convolution weights:

$$w''_{ijk} = w'_{ijk} \Big/ \sqrt{\sum_{i,k} {w'_{ijk}}^{2} + \epsilon}$$

where $\epsilon$ is a small constant to avoid numerical issues.
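
Putting the two steps together, below is a minimal PyTorch sketch of a convolution with modulation and demodulation baked into the weights, following the equations above. The function name `modulated_conv2d`, the tensor shapes, and the grouped-convolution trick for applying per-sample weights are illustrative assumptions rather than the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, styles, demodulate=True, eps=1e-8):
    """Hypothetical helper: modulate, optionally demodulate, then convolve.

    x:       (N, C_in, H, W)        input activations
    weight:  (C_out, C_in, kH, kW)  base convolution weights w_ijk
    styles:  (N, C_in)              per-sample scales s_i
    """
    N, C_in, H, W = x.shape
    C_out, _, kH, kW = weight.shape

    # Modulation: w'_ijk = s_i * w_ijk, giving a separate weight tensor per sample.
    w = weight.unsqueeze(0) * styles.view(N, 1, C_in, 1, 1)   # (N, C_out, C_in, kH, kW)

    # Demodulation: divide by sigma_j = sqrt(sum over i,k of w'^2 + eps).
    if demodulate:
        sigma = torch.sqrt(w.pow(2).sum(dim=(2, 3, 4), keepdim=True) + eps)
        w = w / sigma

    # A grouped convolution applies each sample's weights to that sample only.
    x = x.reshape(1, N * C_in, H, W)
    w = w.reshape(N * C_out, C_in, kH, kW)
    out = F.conv2d(x, w, padding=kH // 2, groups=N)
    return out.reshape(N, C_out, H, W)

# Example usage with made-up shapes.
x = torch.randn(4, 64, 16, 16)        # batch of input activations
base_w = torch.randn(128, 64, 3, 3)   # learned convolution weights
s = torch.rand(4, 64) + 0.5           # per-sample, per-input-channel scales from the style
y = modulated_conv2d(x, base_w, s)    # -> (4, 128, 16, 16)
```

Because the demodulated weights have unit $L_2$ norm per output feature map, the outputs stay near unit standard deviation regardless of the style scales, which is the statistical effect instance normalization was providing.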