Image Adversarial Examples Generation Method Based on Fusion Gaussian Noise
Conference: CIBDA 2022 - 3rd International Conference on Computer Information and Big Data Applications
03/25/2022 - 03/27/2022 at Wuhan, China
Proceedings: CIBDA 2022
Pages: 4
Language: English
Type: PDF
Authors:
Duan, Jing; An, Yi; Yu, Ning; Gu, Liang; Liu, Haitao; Gong, Xin; Duan, Jie (Information and Communication Branch, State Grid Shanxi Electric Power Company, Taiyuan, China)
Abstract:
Neural networks are among today's cutting-edge technologies, and image classification tasks built on them now appear in many areas of society. With the continued promotion and popularization of image classification applications, however, come threats that cannot be ignored. Adversarial examples are a class of security hazard that poses serious risks to trained neural network models. Adversarial attacks fall into two categories: white-box attacks and black-box attacks. Among current methods for generating adversarial examples, white-box attacks are numerous and achieve high success rates, whereas the success rate of black-box attacks remains low. Adversarial examples crafted for black-box attacks are also more realistic: in real life, the physical world confronts us mostly with unpredictable samples, so improving the transferability of adversarial examples to enable black-box attacks is our current focus. We propose a targeted method that adds perturbation to images in order to increase the number of generated images, thereby effectively reducing the overfitting of adversarial examples. Using the ImageNet dataset, we carried out single-model and multi-model attack experiments on 7 models; compared with the baseline methods, our method not only maintains a good success rate in white-box attacks but also significantly improves the attack success rate in black-box attacks.
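The core idea described in the abstract, averaging gradients over several Gaussian-noise-perturbed copies of an image before taking an FGSM-style step, can be sketched as follows. This is a minimal illustration using a toy linear classifier in NumPy; the function names, parameters (`eps`, `sigma`, `n_copies`), and the linear model are assumptions for the sketch, not the paper's actual implementation or models.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def grad_wrt_input(x, W, y):
    """Gradient of the cross-entropy loss w.r.t. the input x
    for a toy linear model with weight matrix W and true label y."""
    p = softmax(W @ x)
    p[y] -= 1.0          # dL/dlogits for cross-entropy with softmax
    return W.T @ p       # chain rule back to the input

def gaussian_fused_fgsm(x, W, y, eps=0.1, sigma=0.05, n_copies=10, seed=0):
    """FGSM-style attack fused with Gaussian noise: average the input
    gradient over several noise-perturbed copies of x (reducing
    overfitting to one model's exact gradient), then take a single
    signed step of size eps and clip back to the valid pixel range."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_copies):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        g += grad_wrt_input(noisy, W, y)
    g /= n_copies
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

# Toy demo: a 3-class linear "model" on a 4-pixel image in [0, 1].
rng = np.random.default_rng(42)
W = rng.normal(size=(3, 4))
x = rng.random(4)
y = int(np.argmax(W @ x))          # model's clean prediction
x_adv = gaussian_fused_fgsm(x, W, y)
```

In a real attack the linear model would be replaced by a deep network (with gradients from autodiff), and the noise-averaged gradient is what is expected to transfer better to unseen black-box models than a gradient computed at the clean image alone.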