Generating unrestricted adversarial examples via three parameters
Naderi, H.; Sharif University of Technology | 2022
- Type of Document: Article
- DOI: 10.1007/s11042-022-12007-x
- Publisher: Springer , 2022
- Abstract:
- Deep neural networks have been shown to be vulnerable to adversarial examples deliberately constructed to misclassify victim models. Because most adversarial examples restrict their perturbations to an Lp-norm ball, existing defense methods have focused on these perturbations, and less attention has been paid to unrestricted adversarial examples, which can create more realistic attacks able to deceive models without affecting human predictions. To address this problem, the proposed adversarial attack method generates an unrestricted adversarial example with a limited number of parameters. The attack selects three points on the input image and, based on their locations, transforms the image into an adversarial example. By limiting the range of movement and location of these three points and by using a discriminator network, the proposed unrestricted adversarial example preserves the image appearance. Experimental results show that the proposed adversarial examples obtain an average success rate of 93.5% in terms of human evaluation on the MNIST and SVHN datasets. The attack also reduces model accuracy by an average of 73% on six datasets: MNIST, FMNIST, SVHN, CIFAR10, CIFAR100, and ImageNet. Adversarial training with the attack also improves model robustness against randomly transformed images. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature
- Keywords:
- Adversarial training ; Attack ; Unrestricted adversarial examples ; Image enhancement ; Attack methods ; Human evaluation ; Image appearance ; Input image ; Model robustness ; Modeling accuracy ; Transformation ; Unrestricted adversarial example ; Deep neural networks
- Source: Multimedia Tools and Applications ; Volume 81, Issue 15 , 2022 , Pages 21919-21938 ; 13807501 (ISSN)
- URL: https://dl.acm.org/doi/10.1007/s11042-022-12007-x
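The abstract describes an attack parameterized by three image points whose bounded displacement defines a transformation of the input. The record does not specify the exact transformation, so the sketch below assumes one natural reading: three control points define an affine warp (three point pairs determine an affine map exactly), and each point is only allowed to move within a small radius, mirroring the paper's limit on the points' range of movement. All function names, the fixed triangle of control points, and the `max_shift` bound are illustrative assumptions, not the authors' implementation; the discriminator and the adversarial search loop are omitted.

```python
import numpy as np

def three_point_affine(src_pts, dst_pts):
    # Solve for the 2x3 affine matrix M with [x', y']^T = M [x, y, 1]^T.
    # Three non-collinear point pairs give six equations for six unknowns.
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src_pts, dst_pts)):
        A[2 * i]     = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    return np.linalg.solve(A, b).reshape(2, 3)

def warp(image, M):
    # Inverse-warp a 2-D grayscale image with nearest-neighbour sampling.
    h, w = image.shape
    Minv = np.linalg.inv(np.vstack([M, [0, 0, 1]]))[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = np.rint(Minv @ coords).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out.ravel()[valid] = image[sy[valid], sx[valid]]
    return out

def perturb(image, max_shift=2.0, rng=None):
    # Move three fixed control points by at most max_shift pixels each
    # (the bounded-movement constraint) and apply the induced affine warp.
    rng = np.random.default_rng(rng)
    h, w = image.shape
    src = np.array([[w * 0.25, h * 0.25],
                    [w * 0.75, h * 0.25],
                    [w * 0.50, h * 0.75]])
    dst = src + rng.uniform(-max_shift, max_shift, src.shape)
    return warp(image, three_point_affine(src, dst))
```

In a full attack, `perturb` would be called repeatedly (or the three displacements optimized directly) until the victim model's prediction flips, with a discriminator scoring candidates to keep the warped image visually natural.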
