Abstract: Generating high-quality samples remains one of the main challenges in the field of generative adversarial networks (GANs). To address this, this study proposes a GAN penalty algorithm that leverages a constructed conditional entropy distance to penalize the generator. While keeping the entropy invariant, the algorithm drives the generated distribution as close to the target distribution as possible, which substantially improves the quality of the generated samples. In addition, to improve the training efficiency of GANs, the network structure is optimized and the initialization strategy of the two networks is modified. Experimental results on several datasets show that the penalty algorithm significantly improves the quality of the generated samples. In particular, on the CIFAR10, STL10, and CelebA datasets, the best FID values are reduced from 16.19, 14.10, and 4.65 to 14.02, 12.83, and 3.22, respectively.
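
The abstract does not specify the exact form of the conditional entropy distance, so the following is only a minimal PyTorch-style sketch of the general idea of adding an entropy-based penalty to the generator loss; the function conditional_entropy_distance and the penalty_weight parameter are hypothetical stand-ins, not the paper's actual formulation.

    import torch
    import torch.nn.functional as F

    def conditional_entropy_distance(real_logits, fake_logits):
        # Illustrative stand-in: treat discriminator outputs as Bernoulli
        # logits and compare the mean binary entropies of the real and
        # generated batches (NOT the paper's exact definition).
        def bernoulli_entropy(logits):
            p = torch.sigmoid(logits)
            eps = 1e-7
            return -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log()).mean()
        return (bernoulli_entropy(real_logits) - bernoulli_entropy(fake_logits)).abs()

    def generator_loss(discriminator, real_batch, fake_batch, penalty_weight=1.0):
        real_logits = discriminator(real_batch)
        fake_logits = discriminator(fake_batch)
        # Standard non-saturating generator loss ...
        adv_loss = F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))
        # ... plus an entropy-based penalty term on the generator.
        penalty = conditional_entropy_distance(real_logits.detach(), fake_logits)
        return adv_loss + penalty_weight * penalty

In this sketch the penalty is simply added to the usual adversarial loss with a weighting coefficient; how the paper balances the two terms, and how entropy invariance is enforced during training, is described in the main text rather than the abstract.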