What is: pixel2style2pixel?
Source | Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
Pixel2Style2Pixel, or pSp, is an image-to-image translation framework based on a novel encoder that directly generates a series of style vectors, which are fed into a pretrained StyleGAN generator, forming the extended W+ latent space. Feature maps are first extracted using a standard feature pyramid over a ResNet backbone. Then, for each target style, a small mapping network is trained to extract the learned style from the corresponding feature map: coarse styles are generated from the small feature map, medium styles from the medium feature map, and fine styles from the largest feature map. The mapping network, map2style, is a small fully convolutional network that gradually reduces the spatial size using a set of 2-strided convolutions followed by LeakyReLU activations. Each generated 512-dimensional style vector is fed into StyleGAN, starting from its matching affine transformation.
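The following is a minimal sketch of how map2style's stride-2 convolutions collapse a feature map to a single 512-dimensional style vector per target style. The pyramid sizes and the kernel/padding values below are illustrative assumptions, not taken from the paper:

```python
# Sketch (hypothetical shapes): each stride-2 convolution roughly halves
# the spatial size, so a feature map is gradually reduced to 1x1, leaving
# one 512-d style vector per map2style network.

def conv_out_size(size, kernel=3, stride=2, padding=1):
    """Output spatial size of one strided convolution (assumed 3x3, pad 1)."""
    return (size + 2 * padding - kernel) // stride + 1

def num_stride2_convs(spatial_size):
    """Count stride-2 convs needed to reduce spatial_size x spatial_size to 1x1."""
    steps = 0
    while spatial_size > 1:
        spatial_size = conv_out_size(spatial_size)
        steps += 1
    return steps

# Hypothetical pyramid levels from the ResNet backbone:
for name, size in [("small", 16), ("medium", 32), ("largest", 64)]:
    print(f"{name}: {size}x{size} -> 1x1 after {num_stride2_convs(size)} stride-2 convs")
```

This illustrates why map2style can stay small: a handful of strided convolutions (each followed by a LeakyReLU in the actual network) suffices to reach a 1x1 spatial extent, regardless of which pyramid level the style is drawn from.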