What is: Ghost Module?
Source | GhostNet: More Features from Cheap Operations |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
A Ghost Module is an image model block for convolutional neural networks that aims to generate more features while using fewer parameters. Specifically, an ordinary convolutional layer in a deep neural network is split into two parts. The first part consists of ordinary convolutions, but their total number is controlled. Given the intrinsic feature maps from the first part, a series of simple linear operations is then applied to generate more feature maps.
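For example, to produce 64 output feature maps, a Ghost module might use the ordinary convolution to generate only 32 intrinsic feature maps and derive the remaining 32 "ghost" feature maps through cheap linear operations, so that only half of the output channels are computed by the expensive convolution (these particular numbers are illustrative; the split ratio is a hyper-parameter).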
Given the widely existing redundancy in the intermediate feature maps calculated by mainstream CNNs, Ghost modules aim to reduce it. In practice, given the input data $X \in \mathbb{R}^{c \times h \times w}$, where $c$ is the number of input channels and $h$ and $w$ are the height and width of the input data, respectively, the operation of an arbitrary convolutional layer for producing $n$ feature maps can be formulated as

$$Y = X \ast f + b$$
where $\ast$ is the convolution operation, $b$ is the bias term, $Y \in \mathbb{R}^{h' \times w' \times n}$ is the output feature map with $n$ channels, and $f \in \mathbb{R}^{c \times k \times k \times n}$ denotes the convolution filters in this layer. In addition, $h'$ and $w'$ are the height and width of the output data, and $k \times k$ is the kernel size of the convolution filters $f$. During this convolution procedure, the required number of FLOPs can be calculated as $n \cdot h' \cdot w' \cdot c \cdot k \cdot k$, which is often very large since the number of filters $n$ and the channel number $c$ are generally large (e.g. 256 or 512).
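As a back-of-the-envelope check of this cost, the short sketch below evaluates the FLOP and parameter counts for an ordinary convolution; the particular layer sizes ($c$, $n$, $h'$, $w'$, $k$) are assumed for illustration and not taken from the paper.

```python
# Illustrative cost of an ordinary convolution Y = X * f + b.
# All sizes below are assumptions chosen for the example.
c, n = 256, 256        # input and output channels
h_out, w_out = 28, 28  # spatial size of the output feature map
k = 3                  # kernel size

flops = n * h_out * w_out * c * k * k   # n * h' * w' * c * k * k
params = n * c * k * k + n              # filters f plus bias b

print(f"FLOPs:  {flops:,}")   # 462,422,016 multiply-accumulates
print(f"Params: {params:,}")  # 590,080
```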
Here, the number of parameters (in $f$ and $b$) to be optimized is explicitly determined by the dimensions of the input and output feature maps. The output feature maps of convolutional layers often contain much redundancy, and some of them can be similar to each other. We point out that it is unnecessary to generate these redundant feature maps one by one with a large number of FLOPs and parameters. Suppose instead that the output feature maps are "ghosts" of a handful of intrinsic feature maps obtained with some cheap transformations. These intrinsic feature maps are often of smaller size and are produced by ordinary convolution filters. Specifically, $m$ intrinsic feature maps $Y' \in \mathbb{R}^{h' \times w' \times m}$ are generated using a primary convolution:

$$Y' = X \ast f'$$
where $f' \in \mathbb{R}^{c \times k \times k \times m}$ are the utilized filters, $m \leq n$, and the bias term is omitted for simplicity. The hyper-parameters such as filter size, stride, and padding are the same as those in the ordinary convolution, to keep the spatial size (i.e. $h'$ and $w'$) of the output feature maps consistent. To further obtain the desired $n$ feature maps, we apply a series of cheap linear operations on each intrinsic feature in $Y'$ to generate $s$ ghost features according to the following function:

$$y_{ij} = \Phi_{i,j}\left(y'_i\right), \quad \forall\, i = 1, \ldots, m, \quad j = 1, \ldots, s$$
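A minimal PyTorch sketch of this primary convolution is shown below; the channel counts and the ratio $s$ are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Assumed sizes: n desired output channels and ratio s, so the primary
# convolution only produces m = n / s intrinsic feature maps Y'.
c, n, s, k = 64, 128, 2, 3
m = n // s

primary_conv = nn.Conv2d(c, m, kernel_size=k, stride=1,
                         padding=k // 2, bias=False)

x = torch.randn(1, c, 56, 56)   # input X
y_prime = primary_conv(x)       # intrinsic feature maps Y'
print(y_prime.shape)            # torch.Size([1, 64, 56, 56])
```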
where $y'_i$ is the $i$-th intrinsic feature map in $Y'$, and $\Phi_{i,j}$ in the above function is the $j$-th (except the last one) linear operation for generating the $j$-th ghost feature map $y_{ij}$; that is to say, $y'_i$ can have one or more ghost feature maps $\{y_{ij}\}_{j=1}^{s}$. The last $\Phi_{i,s}$ is the identity mapping, which preserves the intrinsic feature maps. In this way, we can obtain $n = m \cdot s$ feature maps $Y = \left[y_{11}, y_{12}, \ldots, y_{ms}\right]$ as the output data of a Ghost module. Note that the linear operations $\Phi$ operate on each channel, so their computational cost is much less than that of the ordinary convolution. In practice, there could be several different linear operations in a Ghost module, e.g. $3 \times 3$ and $5 \times 5$ linear kernels, which are analyzed in the experiment part of the paper.
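The PyTorch sketch below is one way to put these pieces together, assuming the cheap linear operations are implemented as depthwise convolutions (a common choice in open-source GhostNet implementations); the channel counts, batch normalization, and activation placement are illustrative assumptions rather than details fixed by the description above.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal sketch of a Ghost module (assumptions noted in comments).

    A primary convolution produces m = out_channels / ratio intrinsic
    feature maps; cheap depthwise convolutions (one possible choice for the
    linear operations Phi) generate the remaining ghost feature maps, and
    the identity branch keeps the intrinsic maps in the output.
    """

    def __init__(self, in_channels, out_channels, kernel_size=1,
                 ratio=2, dw_size=3):
        super().__init__()
        self.out_channels = out_channels
        init_channels = out_channels // ratio        # m intrinsic maps
        new_channels = out_channels - init_channels  # ghost maps to generate

        # Primary (ordinary) convolution: Y' = X * f'
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap linear operations: a depthwise convolution per intrinsic channel
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y_prime = self.primary_conv(x)          # intrinsic feature maps
        ghosts = self.cheap_operation(y_prime)  # ghost feature maps
        # Concatenating Y' directly plays the role of the identity mapping.
        out = torch.cat([y_prime, ghosts], dim=1)
        return out[:, :self.out_channels, :, :]

# Example usage with assumed sizes: 160 -> 240 channels, ratio s = 2.
module = GhostModule(160, 240)
x = torch.randn(1, 160, 14, 14)
print(module(x).shape)  # torch.Size([1, 240, 14, 14])
```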