Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss used for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty on random samples $\hat{x} \sim \mathbb{P}_{\hat{x}}$ to enforce the Lipschitz constraint:

$$L = \underset{\tilde{x} \sim \mathbb{P}_g}{\mathbb{E}}\left[D(\tilde{x})\right] - \underset{x \sim \mathbb{P}_r}{\mathbb{E}}\left[D(x)\right] + \lambda \, \underset{\hat{x} \sim \mathbb{P}_{\hat{x}}}{\mathbb{E}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]$$
It was introduced as part of the overall WGAN-GP model.
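As a minimal PyTorch sketch of this loss (the function name `wgan_gp_loss` and the arguments `D`, `real`, `fake`, and `lambda_gp` are hypothetical), the penalty samples $\hat{x}$ are drawn along straight lines between real and generated samples, as described in the WGAN-GP paper:

```python
import torch


def wgan_gp_loss(D, real, fake, lambda_gp=10.0):
    """Sketch of the WGAN-GP critic loss: Wasserstein term plus gradient penalty."""
    batch_size = real.size(0)

    # Sample x_hat uniformly along straight lines between real and generated samples.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    # Gradient of the critic output with respect to the interpolated samples.
    d_hat = D(x_hat)
    grads = torch.autograd.grad(
        outputs=d_hat,
        inputs=x_hat,
        grad_outputs=torch.ones_like(d_hat),
        create_graph=True,
    )[0]

    # Penalize deviation of the per-sample gradient norm from 1 (two-sided penalty).
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    penalty = ((grad_norm - 1.0) ** 2).mean()

    # Wasserstein estimate E[D(x_tilde)] - E[D(x)] plus the weighted penalty term.
    return D(fake).mean() - D(real).mean() + lambda_gp * penalty
```

Here `create_graph=True` keeps the gradient computation differentiable, so the penalty itself can be backpropagated through when updating the critic.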