What is: Reversible Residual Block?
Source | The Reversible Residual Network: Backpropagation Without Storing Activations |
Year | 2017 |
Data Source | CC BY-SA - https://paperswithcode.com |
Reversible Residual Blocks are skip-connection blocks that learn reversible residual functions with reference to the layer inputs. They were proposed as part of the RevNet CNN architecture. Units in each layer are partitioned into two groups, denoted $x_1$ and $x_2$; the authors find what works best is partitioning the channels. Each reversible block takes inputs $(x_1, x_2)$ and produces outputs $(y_1, y_2)$ according to the following additive coupling rules, inspired by the transformation in NICE (nonlinear independent components estimation), with residual functions $\mathcal{F}$ and $\mathcal{G}$ analogous to those in standard ResNets:

$$y_1 = x_1 + \mathcal{F}(x_2)$$

$$y_2 = x_2 + \mathcal{G}(y_1)$$
Each layer's activations can be reconstructed exactly from the next layer's activations as follows:

$$x_2 = y_2 - \mathcal{G}(y_1)$$

$$x_1 = y_1 - \mathcal{F}(x_2)$$

Because the inputs are recoverable, intermediate activations need not be stored during the forward pass, which is what allows backpropagation without storing activations.
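Below is a minimal NumPy sketch of the coupling and its inverse. The functions `F` and `G` here are hypothetical toy stand-ins (simple `tanh` maps chosen only so the example runs), not the ResNet-style sub-networks used in the paper:

```python
import numpy as np

# Hypothetical stand-ins for the residual functions F and G.
# In RevNet these are ResNet-style sub-networks; toy functions here.
def F(x):
    return np.tanh(x)

def G(x):
    return np.tanh(2.0 * x)

def forward(x1, x2):
    # Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact inversion: x2 = y2 - G(y1), then x1 = y1 - F(x2)
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Round-trip check on toy activations, partitioned channel-wise.
x = np.random.randn(4, 8)
x1, x2 = np.split(x, 2, axis=1)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

The round-trip assertion illustrates the point of the construction: the inputs are recovered from the outputs up to floating-point error, so a training loop can recompute activations on the fly during the backward pass instead of caching them.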