Abstract: Single image super-resolution (SISR) refers to the reconstruction of a high-resolution image from a low-resolution input. Traditional neural-network methods typically perform super-resolution reconstruction in the spatial domain of the image, but they often lose important details during reconstruction. Motivated by the fact that the wavelet transform can separate the "coarse" and "detail" components of image content, this study proposes a wavelet-based deep residual network (DRWSR). Unlike conventional convolutional neural networks, which derive the high-resolution (HR) image directly, the proposed method uses a multi-stage learning strategy: it first infers the wavelet coefficients corresponding to the HR image and then reconstructs the super-resolution (SR) image from them. To capture more information, the method employs a flexible and scalable deep neural network with a residual-in-residual structure. In addition, the proposed network is optimized with a loss function that combines terms from both the image space and the wavelet domain. The proposed method is evaluated on the Set5, Set14, BSD100, and Urban100 datasets, among others. The experimental results show that the proposed method outperforms related image super-resolution methods in both visual quality and peak signal-to-noise ratio (PSNR).
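
To make the central idea concrete, the sketch below implements one level of a 2-D Haar wavelet transform and its inverse in plain NumPy. This is only an illustration of the decomposition/reconstruction step the abstract refers to, not the authors' DRWSR implementation: the forward transform splits an image into a coarse approximation (LL) and three detail subbands (LH, HL, HH), and the inverse rebuilds the image from those coefficients. In the proposed method, a network would predict the HR image's coefficients from an LR input before the inverse step.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet transform.

    Splits an even-sized image into a coarse approximation (LL)
    and three detail subbands (LH, HL, HH), each half the size
    of the input along every axis.
    """
    # Pairwise averages (low-pass) and differences (high-pass) along columns
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # ... then the same filtering along rows
    LL = (lo[0::2] + lo[1::2]) / 2.0   # coarse ("rough") content
    HL = (lo[0::2] - lo[1::2]) / 2.0   # detail subbands
    LH = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2: rebuild the image from its subbands."""
    LH, HL, HH = bands
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    lo[0::2] = LL + HL          # avg + diff recovers the even rows
    lo[1::2] = LL - HL          # avg - diff recovers the odd rows
    hi = np.empty_like(lo)
    hi[0::2] = LH + HH
    hi[1::2] = LH - HH
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = lo + hi        # same reconstruction along columns
    x[:, 1::2] = lo - hi
    return x

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)
    LL, details = haar_dwt2(img)
    print(LL.shape)                              # (4, 4)
    print(np.allclose(haar_idwt2(LL, details), img))  # True
```

Because the transform is perfectly invertible, any loss measured on the predicted wavelet coefficients translates directly into image-space quality, which is why a combined wavelet-domain and image-space loss is a natural fit here.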