What is: BigGAN?
Source | Large Scale GAN Training for High Fidelity Natural Image Synthesis |
Year | 2018 |
Data Source | CC BY-SA - https://paperswithcode.com |
BigGAN is a type of generative adversarial network designed to scale generation to high-resolution, high-fidelity images. It combines a strong baseline with a number of incremental changes and innovations. The baseline and incremental changes are:
- Using SAGAN as a baseline, with spectral normalization for G and D, and using TTUR (the two time-scale update rule).
- Using a hinge loss GAN objective.
- Using class-conditional batch normalization to provide class information to G (but with a linear projection rather than an MLP).
- Using a projection discriminator for D to provide class information to D.
- Evaluating with an exponentially weighted moving average (EWMA) of G's weights, similar to ProGAN.
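The hinge objective above trains D to push real logits above +1 and fake logits below −1, while G simply maximizes D's score on fakes. A minimal NumPy sketch (not the authors' implementation; `d_real` and `d_fake` are hypothetical discriminator logits):

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge loss: penalize real logits below +1
    # and fake logits above -1.
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))

def g_hinge_loss(d_fake):
    # Generator hinge loss: raise the discriminator's score on fakes.
    return -np.mean(d_fake)
```

When the discriminator already separates the batches with margin 1 (real logits ≥ +1, fake logits ≤ −1), its loss is zero, so gradients come only from samples inside the margin.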
The innovations are:
- Increasing the batch size, which substantially improves the model's Inception Score.
- Increasing the width (number of channels) in each layer, which yields a further Inception Score improvement.
- Adding skip connections from the latent variable z to deeper layers of G, which further helps performance.
- Introducing a new variant of Orthogonal Regularization.
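The Orthogonal Regularization variant used here relaxes the original penalty by acting only on the off-diagonal entries of WᵀW, penalizing pairwise correlations between filters without constraining their norms. A minimal NumPy sketch (the coefficient `beta` and weight shapes are illustrative):

```python
import numpy as np

def ortho_reg(W, beta=1e-4):
    # R(W) = beta * || (W^T W) * (1 - I) ||_F^2
    # Only off-diagonal entries of W^T W are penalized, so filter
    # correlations are discouraged but filter norms are left free.
    WtW = W.T @ W
    off_diag = WtW * (1.0 - np.eye(WtW.shape[0]))
    return beta * np.sum(off_diag ** 2)
```

Note that any matrix with mutually orthogonal columns incurs zero penalty regardless of column norms, which is exactly the relaxation relative to the full ||WᵀW − I||² penalty.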