Abstract: Deep learning has recently been widely applied to image classification and recognition, achieving strong results and becoming an important component of AI applications. Alongside the continuing pursuit of model accuracy, recent studies have introduced the concept of "adversarial examples": by adding small perturbations to original samples, an attacker can sharply reduce the accuracy of a trained classifier and thereby defeat deep learning models, which offers new ideas to attackers and imposes new requirements on defenders. After introducing the origin and principle of adversarial example generation, this paper surveys recent research on generating adversarial examples and divides the algorithms into two categories: entire-pixel perturbation and partial-pixel perturbation. These categories are then subdivided by secondary criteria: targeted versus non-targeted, black-box versus white-box, and visible versus invisible perturbations. The MNIST dataset is used to validate the methods, demonstrating the advantages and disadvantages of each. Finally, the paper summarizes the challenges of generating adversarial examples, outlines directions for their development, and discusses their future.
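
As one concrete illustration of the perturbation principle referred to above (a sketch only: the fast gradient sign method of Goodfellow et al. is a canonical formulation, not necessarily the scheme used by every surveyed algorithm), an adversarial example $x'$ can be derived from an input $x$ with true label $y$ as

$$x' = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x J(\theta, x, y)\big),$$

where $J$ is the classifier's loss function, $\theta$ denotes the model parameters, and $\epsilon$ is a small constant bounding the perturbation so that $x'$ remains visually close to $x$ while shifting the model's prediction.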