What is: Cosine Normalization?
Source | Cosine Normalization: Using Cosine Similarity Instead of Dot Product in Neural Networks |
Year | 2017 |
Data Source | CC BY-SA - https://paperswithcode.com |
Multi-layer neural networks traditionally use the dot product between the output vector of the previous layer and the incoming weight vector as the input to the activation function. The result of a dot product is unbounded. To bound the pre-activation and decrease its variance, Cosine Normalization uses cosine similarity or centered cosine similarity (the Pearson correlation coefficient) instead of the dot product in neural networks.
Using cosine normalization, the output of a hidden unit is computed by:

$$ o = f(net_{norm}) = f(\cos\theta) = f\left(\frac{\vec{w}\cdot\vec{x}}{\left|\vec{w}\right|\left|\vec{x}\right|}\right) $$

where $net_{norm}$ is the normalized pre-activation, $\vec{w}$ is the incoming weight vector, $\vec{x}$ is the input vector, $\cdot$ indicates the dot product, and $f$ is a nonlinear activation function. Cosine normalization bounds the pre-activation between -1 and 1.
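The computation for a single hidden unit can be sketched in NumPy as follows; the small `eps` term is an assumption added here for numerical stability (to avoid division by zero for all-zero vectors) and is not part of the formula above:

```python
import numpy as np

def cosine_norm_unit(w, x, f=np.tanh, eps=1e-8):
    """Output of one hidden unit using cosine normalization.

    Replaces the unbounded dot product w . x with cosine similarity,
    so the pre-activation is bounded between -1 and 1.
    """
    # Normalized pre-activation: cosine of the angle between w and x.
    net_norm = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)
    # Apply the nonlinear activation function f.
    return f(net_norm)

def centered_cosine_norm_unit(w, x, f=np.tanh, eps=1e-8):
    """Variant using centered cosine similarity (Pearson correlation):
    mean-center both vectors before computing cosine similarity."""
    w_c = w - w.mean()
    x_c = x - x.mean()
    return f(np.dot(w_c, x_c) / (np.linalg.norm(w_c) * np.linalg.norm(x_c) + eps))

# Example: the pre-activation stays bounded even for large-magnitude inputs.
w = np.array([10.0, -20.0, 5.0])
x = np.array([30.0, 10.0, -40.0])
out = cosine_norm_unit(w, x)
```

With the identity function as `f`, the output is exactly the cosine similarity, so scaling `w` or `x` by any positive constant leaves the unit's output unchanged.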