What is: RevNet?
Source | The Reversible Residual Network: Backpropagation Without Storing Activations |
Year | 2017 |
Data Source | CC BY-SA - https://paperswithcode.com |
A Reversible Residual Network, or RevNet, is a variant of a ResNet where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. The result is a network architecture whose activation storage requirements are independent of depth, and typically at least an order of magnitude smaller compared with equally sized ResNets.
RevNets are composed of a series of reversible blocks. Units in each layer are partitioned into two groups, denoted $x_{1}$ and $x_{2}$; the authors find what works best is partitioning the channels. Each reversible block takes inputs $\left(x_{1}, x_{2}\right)$ and produces outputs $\left(y_{1}, y_{2}\right)$ according to the following additive coupling rules – inspired by the transformation in NICE (nonlinear independent components estimation) – and residual functions $F$ and $G$ analogous to those in standard ResNets:

$$ y_{1} = x_{1} + F\left(x_{2}\right) $$
$$ y_{2} = x_{2} + G\left(y_{1}\right) $$
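As a minimal, framework-free sketch of the coupling above (NumPy stands in for a deep-learning framework; `F` and `G` are arbitrary callables playing the role of the residual functions):

```python
import numpy as np

def rev_block_forward(x1, x2, F, G):
    """Additive coupling: map the partitioned activations (x1, x2)
    to (y1, y2). F and G are the residual functions; any callables
    with matching input/output shapes work here."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2
```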
Each layer’s activations can be reconstructed from the next layer’s activations as follows:

$$ x_{2} = y_{2} - G\left(y_{1}\right) $$
$$ x_{1} = y_{1} - F\left(x_{2}\right) $$
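Continuing the sketch, the inverse simply subtracts the residuals in the reverse order, and a round-trip check confirms the reconstruction is exact (the stand-in residual functions below are illustrative, not from the paper):

```python
import numpy as np

def rev_block_inverse(y1, y2, F, G):
    """Exactly recover (x1, x2) from (y1, y2) by subtracting the
    residuals in the reverse of the order they were added."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Round-trip check: forward coupling, then exact reconstruction.
F, G = np.tanh, lambda z: 0.5 * z      # stand-in residual functions
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((2, 8))
y1 = x1 + F(x2)
y2 = x2 + G(y1)
r1, r2 = rev_block_inverse(y1, y2, F, G)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```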
Note that unlike residual blocks, reversible blocks must have a stride of 1; otherwise the layer discards information and therefore cannot be reversed. Standard ResNet architectures typically have a handful of layers with a larger stride. If we define a RevNet architecture analogously, the activations must be stored explicitly for all non-reversible layers.
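To illustrate that bookkeeping, the hypothetical sketch below (`revnet_stage` and the stride-2 pooling layer are illustrative names, not the paper's implementation) chains reversible blocks, which store nothing, and saves only the input to the non-reversible downsampling layer:

```python
import numpy as np

def revnet_stage(x1, x2, blocks, downsample):
    """One stage of a RevNet-like network: the stride-1 reversible
    blocks need no stored activations (their inputs are recomputable),
    but the input to the strided, non-reversible layer must be kept
    for the backward pass."""
    for F, G in blocks:              # additive-coupling blocks, stride 1
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        x1, x2 = y1, y2              # nothing saved: reconstructable exactly
    saved = (x1, x2)                 # saved explicitly: downsampling is lossy
    return downsample(x1), downsample(x2), saved

# Toy usage: stride-2 average pooling as the non-reversible layer.
pool = lambda z: z.reshape(-1, 2).mean(axis=1)   # halves the length
blocks = [(np.tanh, np.sin), (np.cos, np.tanh)]  # stand-in (F, G) pairs
x1, x2 = np.random.default_rng(1).standard_normal((2, 8))
h1, h2, saved = revnet_stage(x1, x2, blocks, pool)
```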