Abstract: Single image super-resolution (SR) is an important task in image synthesis. In neural-network-based SR, the loss function commonly combines a content-based reconstruction loss with a generative adversarial network (GAN) regularization loss. However, because GAN training is unstable, the discriminative signal that the GAN loss provides for a generated high-resolution image is unstable in the SRGAN model. To alleviate this problem, this study builds on the commonly used VGG reconstruction loss and designs a stable energy-based regularization loss, called the VGG energy loss. The proposed VGG energy loss reuses the VGG encoder from the reconstruction loss as an encoder and designs a corresponding decoder to build a VGG-U-Net autoencoder (VGG-UAE). Using the VGG-UAE as the energy function provides gradients for the generator, so the generated high-resolution samples track the energy flow of the real data. Experiments verify that a generative model trained with the proposed VGG energy loss generates more effective high-resolution images.
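
To make the idea concrete, below is a minimal sketch, not the paper's implementation, of how a frozen VGG encoder plus a small decoder could form an autoencoder whose reconstruction error serves as an energy term added to the generator loss. PyTorch and torchvision are assumed; the truncation point (relu2_2), the decoder layout, the loss weight, and the name `VGGAutoencoderEnergy` are illustrative assumptions rather than the VGG-UAE architecture described in the paper (which uses U-Net-style skip connections).

```python
# Sketch: a VGG-based autoencoder whose per-image reconstruction error acts as
# an energy term in the generator objective. Architecture details are assumed.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VGGAutoencoderEnergy(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: early VGG19 feature layers (frozen), as in perceptual losses.
        self.encoder = vgg19(weights="IMAGENET1K_V1").features[:9]  # up to relu2_2
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Hypothetical decoder mapping the 128-channel features back to image space.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        # Energy = per-image reconstruction error of the autoencoder.
        recon = self.decoder(self.encoder(x))
        return ((recon - x) ** 2).flatten(1).mean(dim=1)

# Usage inside a generator update: the energy of generated HR images is minimized,
# so gradients from the energy function flow back into the generator.
energy_fn = VGGAutoencoderEnergy()
sr = torch.rand(2, 3, 96, 96, requires_grad=True)  # stand-in for generated HR batch
hr = torch.rand(2, 3, 96, 96)                      # real HR batch
vgg_energy_loss = energy_fn(sr).mean()             # pull generated samples toward low energy
total_generator_loss = ((sr - hr) ** 2).mean() + 1e-3 * vgg_energy_loss
total_generator_loss.backward()
```

In the sketch the energy simply replaces the adversarial term as a regularizer alongside the pixel reconstruction loss; in practice the decoder would be trained on real high-resolution data so that low energy corresponds to the real-data manifold.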