Abstract: With the advent of the intelligent age, intelligent systems equipped with deep neural networks (DNNs) have penetrated every aspect of daily life. However, owing to their black-box nature and large scale, the predictions of neural networks are difficult to trust fully. When neural networks are applied in safety-critical fields such as autonomous driving, guaranteeing their security remains a major challenge for both academia and industry. For this reason, academia has carried out extensive research on robustness, a particular security property of neural networks, and has proposed many algorithms for robustness analysis and verification. Verification algorithms for feedforward neural networks (FNNs), comprising both exact and approximate methods, have developed relatively rapidly, whereas verification algorithms for other types of networks, such as recurrent neural networks (RNNs), are still at an early stage. This study reviews the current development of DNNs and the challenges of deploying them in real life, exhaustively surveys the robustness verification algorithms for FNNs and RNNs, and analyzes and compares the intrinsic connections among these algorithms. It also investigates the security verification algorithms of RNNs in specific application scenarios and identifies future research directions in the field of neural network robustness verification.