Abstract: With the remarkable success of deep learning in fields such as computer vision and natural language processing, software engineering researchers have begun to explore its application to software engineering tasks. Existing research indicates that deep learning offers advantages in various code-related tasks, such as code retrieval and code summarization, that traditional techniques and classical machine learning methods cannot match. Deep learning models trained for code-related tasks are referred to as deep code models. However, like models for natural language processing and image processing, deep code models face numerous security challenges stemming from the fragility and opacity of neural networks, and their security has become a research focus in software engineering. In recent years, researchers have proposed numerous attack and defense methods for deep code models. Nevertheless, there is no systematic review of research on deep code model security, which hinders newcomers from quickly gaining an understanding of the field. To provide a comprehensive overview of the current research, challenges, and latest findings in this area, this study collects 32 relevant papers and categorizes them into two main classes: backdoor attack and defense techniques, and adversarial attack and defense techniques. It systematically analyzes and summarizes the collected papers under these two categories, then outlines the experimental datasets and evaluation metrics commonly used in this line of work. Finally, it identifies key open challenges and suggests feasible future research directions, aiming to provide valuable guidance for further advances in deep code model security.