Abstract: Constraint solving is a fundamental approach for verifying deep neural networks (DNNs). In the field of AI safety, DNNs often undergo modifications to their structure and parameters for purposes such as repair or attack. For such scenarios, we propose the problem of incremental DNN verification, which asks whether a safety property still holds after the DNN has been modified. To address this problem, we present an incremental satisfiability modulo theories (SMT) algorithm based on the Reluplex framework. Our algorithm, called DeepInc, simulates the key features of the configurations from which the verification result of the original solving procedure was inferred, and heuristically checks whether the corresponding proofs remain valid for the modified DNN. Experimental results demonstrate that DeepInc outperforms Marabou in efficiency in most cases. Moreover, for cases where the property is violated both before and after modification, DeepInc achieves a markedly larger speedup, even compared to the state-of-the-art verifier α,β-CROWN.
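To give a rough intuition for the incremental idea sketched in the abstract, the following is a minimal, hypothetical Python sketch: it replays the leaf configurations (abstracted here as per-split proof certificates) produced by the original verification run against the modified network, and re-solves only those splits whose proofs no longer hold. The names `Split`, `certifies`, and `full_verify` are illustrative assumptions, not the actual DeepInc or Marabou API.

```python
# Hypothetical sketch of incremental re-checking of old proofs on a modified DNN.
# All helper names are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Split:
    """One leaf configuration of the original search tree (e.g., a ReLU phase fixing)."""
    phase_fixing: dict   # ReLU index -> 'active' / 'inactive'
    certificate: object  # proof object (e.g., bounds) recorded for this leaf

def incremental_verify(old_splits: List[Split],
                       modified_net,
                       certifies: Callable[[Split, object], bool],
                       full_verify: Callable[[object, dict], str]) -> str:
    """Return 'holds' if every old proof still certifies the modified network,
    re-verifying from scratch only the splits whose certificates break."""
    for split in old_splits:
        if certifies(split, modified_net):
            continue  # old proof still valid for the modified network: reuse it
        # Proof invalidated by the modification: re-solve only this sub-problem.
        if full_verify(modified_net, split.phase_fixing) == 'violated':
            return 'violated'
    return 'holds'
```

The potential saving comes from the common case in which most of the original proofs survive the modification, so only a small fraction of the search tree must be re-solved.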