Abstract: Face-spoofing detection based on deep convolutional neural networks has achieved good performance in recent years. However, deep neural networks are vulnerable to adversarial examples, which reduces the safety of face-based application systems. It is therefore necessary to analyze the mechanism of generating adversarial examples so that face-spoofing detection algorithms become more robust. Compared with general classification problems, face-spoofing detection has a smaller inter-class distance, and the perturbation is difficult to assign. Motivated by the above, this study proposes an approach to generating adversarial examples for face-spoofing detection by combining minimum perturbation dimensions and visual concentration. In the proposed approach, the perturbation is concentrated on a few pixels in a single component, and the intervals between the perturbed pixels are constrained according to visual concentration. With such constraints, the generated adversarial examples have a low probability of being perceived by humans. The adversarial examples generated by the proposed approach can deceive the deep-neural-network-based classifier while changing only 1.36% of the pixels on average. Furthermore, the human visual perception rate of the proposed approach decreases by about 20% compared with DeepFool.
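To make the idea concrete, the sketch below illustrates one possible way to confine a gradient-signed perturbation to a few pixels of a single image component while enforcing an interval constraint between the chosen pixels. This is a minimal sketch under stated assumptions, not the paper's exact procedure: the function name, the parameters (`num_pixels`, `min_spacing`, `epsilon`), the gradient-magnitude ranking, and the particular form of the spacing constraint are all illustrative.

```python
import numpy as np

def sparse_concentrated_perturbation(image, grad, num_pixels=8,
                                     channel=0, min_spacing=4, epsilon=16.0):
    """Illustrative sketch: perturb only a few pixels in one component,
    keeping a minimum spatial interval between the chosen pixels.

    image : (H, W, C) float array in [0, 255]
    grad  : (H, W, C) gradient of the classifier loss w.r.t. the image

    The spacing rule and parameter values are assumptions for illustration,
    not the procedure described in the paper.
    """
    h, w, _ = image.shape
    # Rank candidate pixels in the chosen component by gradient magnitude.
    scores = np.abs(grad[:, :, channel])
    order = np.argsort(scores, axis=None)[::-1]

    chosen = []
    for flat_idx in order:
        r, c = divmod(int(flat_idx), w)
        # Interval constraint: skip candidates that fall too close
        # (in Manhattan distance) to an already selected pixel.
        if all(abs(r - r0) + abs(c - c0) >= min_spacing for r0, c0 in chosen):
            chosen.append((r, c))
        if len(chosen) == num_pixels:
            break

    adv = image.copy()
    for r, c in chosen:
        # Push each selected pixel in the direction that increases the loss.
        adv[r, c, channel] += epsilon * np.sign(grad[r, c, channel])
    return np.clip(adv, 0.0, 255.0)
```

In this reading, restricting the change to a handful of well-separated pixels in one component keeps the total number of modified pixels small, which is consistent with the abstract's reported 1.36% average pixel change; the actual constraint used in the paper may differ.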