What is: Adaptive Content Generating and Preserving Network?
Source | Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
ACGPN, or Adaptive Content Generating and Preserving Network, is a generative adversarial network for virtual try-on clothing applications.
In Step I, the Semantic Generation Module (SGM) takes the target clothing image $\mathcal{T}_{c}$, the pose map $\mathcal{M}_{p}$, and the fused body part mask $\mathcal{M}^{F}_{h}$ as the input to predict the semantic layout and to output the synthesized body part mask $\mathcal{M}^{S}_{h}$ and the target clothing mask $\mathcal{M}^{S}_{c}$.
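As an illustration of Step I, the sketch below (PyTorch, not the authors' code) shows how the three inputs can be concatenated channel-wise and mapped to the two output masks. The class name, layer sizes, and the 18-channel keypoint-heatmap pose encoding are assumptions for the example; the actual SGM is a conditional GAN generator rather than this small convolutional stack.

```python
# Minimal sketch of the SGM interface, assuming PyTorch and a simplified
# single-stage convolutional generator (the real module is a conditional GAN).
import torch
import torch.nn as nn

class SemanticGenerationSketch(nn.Module):
    """Maps (T_c, M_p, M^F_h) to the synthesized body part mask M^S_h
    and the target clothing mask M^S_c."""
    def __init__(self, cloth_ch=3, pose_ch=18, mask_ch=1, hidden=64):
        super().__init__()
        in_ch = cloth_ch + pose_ch + mask_ch
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, 3, padding=1),  # two single-channel output masks
        )

    def forward(self, cloth, pose, fused_body_mask):
        x = torch.cat([cloth, pose, fused_body_mask], dim=1)
        out = torch.sigmoid(self.net(x))
        return out[:, :1], out[:, 1:]  # M^S_h, M^S_c

# Usage with dummy tensors at 256x192 (a resolution commonly used for try-on data):
sgm = SemanticGenerationSketch()
m_s_h, m_s_c = sgm(torch.randn(1, 3, 256, 192),
                   torch.randn(1, 18, 256, 192),
                   torch.randn(1, 1, 256, 192))
```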
In Step II, the Clothes Warping Module (CWM) warps the target clothing image to $\mathcal{T}^{W}_{c}$ according to the predicted semantic layout, where a second-order difference constraint is introduced to stabilize the warping process.
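The constraint regularizes the thin-plate-spline (TPS) control points so that neighbouring points move consistently and the clothing texture is not over-distorted. The sketch below shows a generic second-order difference penalty over a regular control-point grid; it illustrates the idea only and is not necessarily the paper's exact formulation.

```python
# Hedged sketch: a generic second-order difference smoothness penalty on a
# grid of TPS control points, illustrating the constraint used to stabilize
# the warp. The function name and tensor layout are assumptions.
import torch

def second_order_smoothness(ctrl_pts: torch.Tensor) -> torch.Tensor:
    """ctrl_pts: (B, H, W, 2) grid of warped TPS control point coordinates."""
    # Second-order difference along the horizontal grid direction.
    dxx = ctrl_pts[:, :, :-2] - 2 * ctrl_pts[:, :, 1:-1] + ctrl_pts[:, :, 2:]
    # Second-order difference along the vertical grid direction.
    dyy = ctrl_pts[:, :-2, :] - 2 * ctrl_pts[:, 1:-1, :] + ctrl_pts[:, 2:, :]
    return dxx.abs().mean() + dyy.abs().mean()

# Example: penalty for a 5x5 control grid over a batch of 2 warps.
loss = second_order_smoothness(torch.randn(2, 5, 5, 2))
```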
In Steps III and IV, the Content Fusion Module (CFM) first produces the composited body part mask $\mathcal{M}^{C}_{h}$ using the original clothing mask $\mathcal{M}_{c}$, the synthesized clothing mask $\mathcal{M}^{S}_{c}$, the body part mask $\mathcal{M}_{h}$, and the synthesized body part mask $\mathcal{M}^{S}_{h}$, and then exploits a fusion network to generate the try-on image $\mathcal{I}^{S}$ by utilizing the information $\mathcal{T}^{W}_{c}$, $\mathcal{M}^{C}_{h}$, and the body part image $\mathcal{I}_{h}$ from previous steps.
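The sketch below gives one plausible reading of the mask-composition idea: body regions visible before try-on and still uncovered afterwards are preserved, while regions newly exposed by the target clothes must be generated by the fusion network. The exact composition rule in the paper is more involved; the function below is only an illustration under that assumption.

```python
# Hedged sketch of the composited body part mask M^C_h (not the paper's
# exact rule). All inputs are binary masks of shape (B, 1, H, W).
import torch

def composite_body_mask(m_c, m_s_c, m_h, m_s_h):
    """m_c: original clothing mask, m_s_c: synthesized clothing mask,
    m_h: original body part mask, m_s_h: synthesized body part mask."""
    # Body regions visible before try-on and not covered by the new clothes: preserve.
    preserved = m_h * (1 - m_s_c)
    # Regions occupied by the original clothes but exposed as body after try-on
    # (e.g. arms when switching from long to short sleeves): generate.
    generated = m_s_h * m_c * (1 - m_s_c)
    return torch.clamp(preserved + generated, 0, 1)

# The fusion network then combines the warped clothes T^W_c, the composited
# mask M^C_h, and the preserved body part image I_h into the try-on image I^S.
m_comp = composite_body_mask(*[torch.randint(0, 2, (1, 1, 256, 192)).float()
                               for _ in range(4)])
```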