
What is: Weight Normalization?

Source: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
Year: 2016
Data Source: CC BY-SA - https://paperswithcode.com

Weight Normalization is a normalization method for training neural networks. It is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each $k$-dimensional weight vector $\textbf{w}$ in terms of a parameter vector $\textbf{v}$ and a scalar parameter $g$, and performs stochastic gradient descent with respect to those new parameters instead. Weight vectors are expressed in terms of the new parameters using:

$$ \textbf{w} = \frac{g}{\Vert\textbf{v}\Vert}\textbf{v} $$

where $\textbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\Vert\textbf{v}\Vert$ denotes the Euclidean norm of $\textbf{v}$. This reparameterization has the effect of fixing the Euclidean norm of the weight vector $\textbf{w}$: we now have $\Vert\textbf{w}\Vert = g$, independent of the parameters $\textbf{v}$.
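
To make the reparameterization concrete, here is a minimal sketch of a weight-normalized linear layer in PyTorch. The class name `WeightNormLinear` and the initialization scales are illustrative assumptions, not part of the original paper; PyTorch also ships a built-in `torch.nn.utils.weight_norm` wrapper that applies the same idea to existing layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNormLinear(nn.Module):
    """Linear layer whose weights are parameterized as w = (g / ||v||) * v."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Direction parameter v: one k-dimensional vector per output unit
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        # Scalar scale parameter g per output unit
        self.g = nn.Parameter(torch.ones(out_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Euclidean norm of each row of v (over the k input dimensions)
        v_norm = self.v.norm(dim=1, keepdim=True)
        # Reparameterized weight: its norm equals g, independent of v
        w = self.g.unsqueeze(1) / v_norm * self.v
        return F.linear(x, w, self.bias)

# Usage: gradients flow to v and g instead of to w directly
layer = WeightNormLinear(8, 4)
out = layer(torch.randn(2, 8))
```

Because the norm of the effective weight is pinned to $g$, the direction and scale of each weight vector are learned by separate parameters, which is what decouples the gradient of the scale from the gradient of the direction.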