What is: Gaussian Gated Linear Network?
| Source | Gaussian Gated Linear Networks |
| Year | 2020 |
| Data Source | CC BY-SA - https://paperswithcode.com |
Gaussian Gated Linear Network, or G-GLN, is a multivariate extension of the recently proposed GLN family of deep neural networks, obtained by reformulating the GLN neuron as a gated product of Gaussians. The G-GLN formulation exploits the fact that exponential family densities are closed under multiplication, a property that has seen much use in the Gaussian Process and related literature. As with the Bernoulli GLN, every neuron in a G-GLN directly predicts the target distribution.
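To make the closure property concrete (a standard identity, not specific to this paper's notation), the weighted product of one-dimensional Gaussian densities $\prod_i \mathcal{N}(x; \mu_i, \sigma_i^2)^{w_i}$ is, after normalization, again a Gaussian $\mathcal{N}(x; \mu, \sigma^2)$ with

$$\frac{1}{\sigma^{2}} = \sum_{i} \frac{w_{i}}{\sigma_{i}^{2}}, \qquad \mu = \sigma^{2} \sum_{i} \frac{w_{i}\,\mu_{i}}{\sigma_{i}^{2}},$$

so a neuron that combines its inputs multiplicatively can always emit valid Gaussian sufficient statistics.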
Precisely, a G-GLN is a feed-forward network of data-dependent distributions. Each neuron calculates the sufficient statistics for its associated PDF using its active weights, given those emitted by neurons in the preceding layer. The network consists of $L+1$ layers indexed by $i \in \{0, \dots, L\}$, with $K_i$ neurons in each layer $i$. The weight space for a neuron in layer $i$ is denoted by $\mathcal{W}_i$; the subscript is needed since the dimension of the weight space depends on $K_{i-1}$. Each neuron/distribution is indexed by its position in the network when laid out on a grid; for example, $f_{ij}$ refers to the family of PDFs defined by the $j$th neuron in the $i$th layer. Similarly, $c_{ij}$ refers to the context function associated with each neuron in layers $i \geq 1$, with $\mu_{ij}$ and $\sigma^2_{ij}$ (or $\Sigma_{ij}$ in the multivariate case) referring to the sufficient statistics for each Gaussian PDF.
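A minimal sketch of a single neuron's prediction step in the scalar case, assuming the precision-weighted form of the Gaussian product above (the function name and array layout are illustrative, not the paper's reference implementation):

```python
import numpy as np

def ggln_neuron_predict(mu_prev, var_prev, weights):
    """Gated product of scalar Gaussians for one G-GLN neuron.

    mu_prev, var_prev: means/variances emitted by the previous layer,
        each of shape (K,).
    weights: the neuron's active (context-selected) weights, shape (K,).
    Returns the mean and variance of the neuron's output Gaussian.
    """
    precision = np.sum(weights / var_prev)            # 1/sigma^2 = sum_i w_i / sigma_i^2
    var_out = 1.0 / precision
    mu_out = var_out * np.sum(weights * mu_prev / var_prev)
    return mu_out, var_out
```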
There are two types of input to neurons in the network. The first is the side information, which can be thought of as the input features and is used to determine the weights used by each neuron via half-space gating. The second is the input to the neuron, which consists of the PDFs output by the previous layer, or in the case of layer 0, some provided base models. To apply a G-GLN in a supervised learning setting, we need to map the sequence of input-label pairs $(x_t, y_t)$ for $t = 1, 2, \dots$ onto a sequence of (side information, base Gaussian PDFs, label) triplets $\left(z_t, \left(f_{0i}\right)_i, y_t\right)$. The side information $z_t$ is set to the (potentially normalized) input features $x_t$. The Gaussian PDFs for layer 0 will generally include the necessary base Gaussian PDFs to span the target range, and optionally some base prediction PDFs that capture domain-specific knowledge.
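The sketch below illustrates how half-space gating could select a neuron's active weights from the side information, together with a hypothetical choice of layer-0 base PDFs; the hyperplane sampling, bit-packing scheme, and base-model grid are all assumptions made for illustration rather than details fixed by the source.

```python
import numpy as np

def halfspace_context(z, hyperplanes, biases):
    """Map side information z to a context index via half-space gating.

    Each of the C hyperplanes contributes one bit (is z on or above the
    hyperplane?); the bits are packed into an integer in [0, 2^C).
    """
    bits = (hyperplanes @ z >= biases).astype(int)    # shape (C,)
    return int(bits @ (2 ** np.arange(bits.size)))

def active_weights(z, weight_table, hyperplanes, biases):
    """Select the weight vector indexed by the context of z."""
    return weight_table[halfspace_context(z, hyperplanes, biases)]

rng = np.random.default_rng(0)
dim_z, num_bits, fan_in = 4, 3, 5
hyperplanes = rng.normal(size=(num_bits, dim_z))      # random gating hyperplanes
biases = np.zeros(num_bits)
weight_table = np.full((2 ** num_bits, fan_in), 1.0 / fan_in)

# Hypothetical layer-0 base PDFs chosen to span a normalized target range:
base_mus = np.linspace(-2.0, 2.0, fan_in)             # base means covering the targets
base_vars = np.ones(fan_in)                           # unit-variance base Gaussians
```

Combined with `ggln_neuron_predict` above, each neuron in a layer would be evaluated on the same side information $z_t$ but with its own gating parameters and weight table.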