
What is: Pyramid Vision Transformer?

Source: Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Year: 2021
Data Source: CC BY-SA - https://paperswithcode.com

PVT, or Pyramid Vision Transformer, is a type of vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically, it allows for more fine-grained inputs (4×4 pixels per patch) to be used, while simultaneously shrinking the sequence length of the Transformer as it deepens, reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer is used to further reduce resource consumption when learning high-resolution features.
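The idea behind SRA can be sketched as follows: queries are kept at full resolution, but keys and values are spatially reduced by a ratio R before attention, shrinking the attention matrix from N×N to N×(N/R²). This is a minimal NumPy sketch, not the paper's implementation — the function name is hypothetical, random matrices stand in for learned projections, and average pooling stands in for PVT's learned spatial-reduction layer.

```python
import numpy as np

def spatial_reduction_attention(x, H, W, R, d):
    """Single-head SRA sketch: x is an (N, d) token sequence, N = H * W.

    Keys/values are pooled by R in each spatial dimension, so the
    attention matrix is (N, N / R^2) instead of (N, N).
    """
    # Hypothetical random projections stand in for learned weights.
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

    q = x @ Wq                                      # (N, d), full resolution

    # Spatial reduction: view tokens as an H x W grid, average-pool R x R.
    grid = x.reshape(H, W, d)
    pooled = grid.reshape(H // R, R, W // R, R, d).mean(axis=(1, 3))
    x_red = pooled.reshape((H // R) * (W // R), d)  # (N / R^2, d)

    k = x_red @ Wk
    v = x_red @ Wv

    # Scaled dot-product attention with softmax over the reduced keys.
    attn = q @ k.T / np.sqrt(d)                     # (N, N / R^2)
    attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                                 # (N, d)

H, W, R, d = 8, 8, 4, 16
x = np.random.default_rng(1).standard_normal((H * W, d))
out = spatial_reduction_attention(x, H, W, R, d)
print(out.shape)  # (64, 16): output keeps full resolution, cost is 1/R^2
```

With R = 4 the 64 queries attend to only 4 reduced key/value tokens, which is what makes attention over high-resolution early-stage feature maps affordable.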

The entire model is divided into four stages, each of which consists of a patch embedding layer and an $L_i$-layer Transformer encoder. Following a pyramid structure, the output resolution of the four stages progressively shrinks from high (4-stride) to low (32-stride).
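The stride progression above determines the feature-map size and sequence length at each stage. A small sketch (the helper name is mine, not from the paper) shows the per-stage shapes for a standard 224×224 input:

```python
def pvt_stage_shapes(H, W, strides=(4, 8, 16, 32)):
    """Feature-map resolution and token count per PVT stage.

    Stage i downsamples the input image by strides[i], so its encoder
    operates on a sequence of (H / s) * (W / s) tokens.
    """
    shapes = []
    for s in strides:
        h, w = H // s, W // s
        shapes.append(((h, w), h * w))  # ((height, width), sequence length)
    return shapes

stages = pvt_stage_shapes(224, 224)
print(stages)
# [((56, 56), 3136), ((28, 28), 784), ((14, 14), 196), ((7, 7), 49)]
```

The sequence length drops by 4× at every stage, which is how PVT keeps the cost of its deeper Transformer layers in check while still producing the multi-scale feature maps that dense prediction heads expect.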