Abstract: Freehand sketches intuitively convey users’ creative intentions through simple lines, enabling users to express their thinking processes and design inspirations or to produce target images and videos. With the development of deep learning methods, sketch-based visual content generation performs cross-domain feature mapping by learning the feature distributions between sketches and visual objects (images and videos), enabling the automated generation of sketches from images and of images or videos from sketches. Compared with traditional manual creation, it markedly improves the efficiency and diversity of generation; it has thus become one of the most important research directions in computer vision and graphics and plays an important role in design, visual creation, and related fields. This study therefore presents an overview of the research progress and future development of deep learning methods for sketch-based visual content generation. It classifies existing work into sketch-based image generation and sketch-based video generation according to the visual object, and analyzes the generative models in detail in combination with specific tasks, including cross-domain generation between sketches and visual content, style transfer, and visual content editing. It then summarizes and compares the commonly used datasets, points out sketch propagation methods for addressing insufficient sketch data, and reviews evaluation methods for generative models. Finally, the study discusses the challenges that sketches face in visual content generation applications and anticipates research trends and future directions for generative models.