Abstract: Artificial intelligence (AI) has been widely applied to many aspects of daily life. However, neural networks, the primary technique for realizing AI, can exhibit undefined behavior when faced with inputs outside their training distribution or under adversarial attacks. In safety-critical fields such as autonomous driving and intelligent healthcare, such undefined behavior can pose significant threats to human safety. It is therefore crucial to employ complete verification methods to verify the properties of neural networks and ensure the correctness of their behavior. To improve verification efficiency, complete neural network verification tools have introduced a variety of optimization techniques. However, the actual impact of these techniques has not been thoroughly explored, making it difficult for researchers to identify the most effective optimization directions. This paper introduces the common techniques used in neural network verification and presents a universal framework for complete neural network verification. Within this framework, it focuses on the optimization methods employed by state-of-the-art tools for constraint solving, branch selection, and bound computation. In addition, a series of experiments is designed to investigate how much each acceleration technique contributes to the performance of each tool, and to identify the most effective acceleration strategies and the most promising optimization directions. The findings provide valuable references for researchers in this field.