Regularization and Normalization For Generative Adversarial Networks: A Survey

Ziqiang Li, Rentuo Tao, Pengfei Xia, Huanhuan Chen, Bin Li

Generative Adversarial Networks (GANs), a popular class of generative models, have been widely applied in different scenarios thanks to the development of deep neural networks. The standard GAN was formulated under the non-parametric assumption that networks have infinite capacity, and it remains unknown whether GANs can generate realistic samples without any prior. Because of these strong assumptions, GAN training suffers from many issues, such as non-convergence, mode collapse, vanishing gradients, and sensitivity to hyperparameters. Regularization and normalization are common ways of introducing prior information and can also be used to stabilize training. Many regularization and normalization methods have been proposed for GANs. To explain these methods systematically, this paper summarizes the regularization and normalization methods used in GANs and classifies them into seven groups: gradient penalty, norm normalization and regularization, Jacobian regularization, layer normalization, consistency regularization, data augmentation, and self-supervision. The paper analyzes these methods and highlights possible directions for future study in this area.
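As a concrete illustration of the first category, below is a minimal sketch of a WGAN-GP-style gradient penalty, one well-known member of the gradient-penalty family the survey covers. It assumes a PyTorch `discriminator` that maps 4D image batches to scalar scores; the function name and arguments are illustrative, not taken from the survey itself.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """WGAN-GP-style penalty: drive the discriminator's gradient norm
    toward 1 on random interpolations of real and fake samples."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample (assumes 4D image tensors).
    alpha = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = (alpha * real + (1 - alpha) * fake).detach().requires_grad_(True)

    scores = discriminator(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty can be backpropagated
    )[0]
    # Per-sample L2 gradient norm, penalized for deviating from 1 (the Lipschitz target).
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```

The returned scalar would be added, scaled by a penalty coefficient, to the discriminator loss. Other categories admit similarly compact implementations; for instance, spectral normalization, a norm normalization method, is available in PyTorch as `torch.nn.utils.spectral_norm`.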
