Abstract: Single-image super-resolution reconstruction is undercut by the problem of ambiguity: for a given low-resolution (LR) patch, there exist several corresponding high-resolution (HR) patches. Learning-based approaches suffer from this ambiguity and can only learn the inverse mapping from the LR patch to the mean of these HR patches, resulting in visually blurred results. To alleviate the high-frequency loss caused by this ambiguity, this paper presents a deep network for image super-resolution that utilizes online retrieved data to compensate for high-frequency details. The method constructs a deep network that predicts the HR reconstruction through three paths: a bypass connection that feeds the LR image directly to the last layer of the network; an internal high-frequency information inference path that regresses the HR image from the input LR image to reconstruct its main structure; and an external high-frequency information compensation path that enhances the internal inference results based on online retrieved similar images. In the external compensation path, to adaptively extract high-frequency details for enhancing the internal inference, the high-frequency details are transferred under constraints measured by hierarchical features. Compared with previous cloud-based image super-resolution methods, the proposed method is end-to-end trainable. Thus, after training on a large dataset, it can model both internal inference and external compensation, and strike a good trade-off between the two terms to obtain the best reconstruction. Experimental results on image super-resolution demonstrate the superiority of the proposed method over not only conventional data-driven super-resolution methods but also recently proposed deep learning approaches, in both subjective and objective evaluations.