
What is: Reversible Residual Block?

Source: The Reversible Residual Network: Backpropagation Without Storing Activations
Year: 2017
Data Source: CC BY-SA - https://paperswithcode.com

Reversible Residual Blocks are skip-connection blocks that learn reversible residual functions with reference to the layer inputs. They were proposed as part of the RevNet CNN architecture. Units in each layer are partitioned into two groups, denoted $x_1$ and $x_2$; the authors find that partitioning the channels works best. Each reversible block takes inputs $(x_1, x_2)$ and produces outputs $(y_1, y_2)$ according to the following additive coupling rules, inspired by the transformation in NICE (non-linear independent components estimation), with residual functions $F$ and $G$ analogous to those in standard ResNets:

$$y_1 = x_1 + F(x_2)$$
$$y_2 = x_2 + G(y_1)$$
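A minimal sketch of the forward coupling, assuming PyTorch; `rev_block_forward` and its `F`/`G` arguments are illustrative names, and any channel-preserving residual functions can be plugged in:

```python
import torch

def rev_block_forward(x1: torch.Tensor, x2: torch.Tensor, F, G):
    # Additive coupling: y1 = x1 + F(x2), then y2 = x2 + G(y1).
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2
```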

Each layer’s activations can be reconstructed from the next layer’s activations as follows:

$$x_2 = y_2 - G(y_1)$$
$$x_1 = y_1 - F(x_2)$$
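Because this inverse is exact, the block's input activations do not need to be stored during the forward pass; they can be recomputed from the outputs during backpropagation. Continuing the sketch above (the plain convolutions below are stand-ins for the residual functions, chosen only for illustration), the inverse and a round-trip check might look like:

```python
import torch
import torch.nn as nn

def rev_block_inverse(y1: torch.Tensor, y2: torch.Tensor, F, G):
    # Undo the coupling in reverse order: recover x2 first, then x1.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Round-trip check, reusing rev_block_forward from the sketch above.
F = nn.Conv2d(8, 8, kernel_size=3, padding=1)  # stand-in residual function
G = nn.Conv2d(8, 8, kernel_size=3, padding=1)  # stand-in residual function

x = torch.randn(2, 16, 32, 32)
x1, x2 = x.chunk(2, dim=1)  # partition the channels into two groups
y1, y2 = rev_block_forward(x1, x2, F, G)
x1_rec, x2_rec = rev_block_inverse(y1, y2, F, G)
print(torch.allclose(x1_rec, x1, atol=1e-5),
      torch.allclose(x2_rec, x2, atol=1e-5))  # True True (up to float error)
```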