Abstract: Artificial intelligence (AI) has been widely applied to many aspects of our lives. However, neural networks, the primary technique for realizing AI, can exhibit undefined behavior when faced with inputs outside their training data or under adversarial attacks. In safety-critical domains such as autonomous driving and intelligent healthcare, such undefined behavior can pose significant threats to human safety. It is therefore crucial to apply complete verification methods to establish the properties of neural networks and ensure that their behavior is well understood and reliable. To improve efficiency, complete neural network verification tools have each proposed their own optimization methods. However, the true impact of these methods has not been thoroughly explored, making it difficult for researchers to identify the most effective optimization directions. This survey introduces the common techniques in the field of neural network verification and presents a universal framework for complete neural network verification. Within this framework, we focus on the optimization methods employed by state-of-the-art tools for constraint solving, branch selection, and boundary computation. We design a series of experiments to investigate how much each acceleration technique contributes to the performance of each tool, and to identify the most effective acceleration strategies and the most promising optimization directions. These results aim to provide a valuable reference for researchers in the field.