Adversarial Robustness Against Perceptual and Unforeseen Attacks in Deep Neural Networks in Images
Azizmalayeri, Mohammad | 2021
- Type of Document: M.Sc. Thesis
- Language: Farsi
- Document No: 54394 (19)
- University: Sharif University of Technology
- Department: Computer Engineering
- Advisor(s): Rohban, Mohammad Hossein
- Abstract:
- Improvements in deep neural networks and their widespread use in research and practical applications have raised significant concerns about the robustness of these networks against adversarial examples, which are designed to deceive a deep network into computing an incorrect output through a slight change in the input. Since this is an essential issue in highly sensitive applications, it is necessary to use a training method that reduces the model's sensitivity to these changes while still preserving accuracy. The common method for this goal is training the model on adversarial examples: adversarial examples are generated during training, and the model is trained on them. This allows the model to learn more robust features against the particular types of adversarial examples used in the training process and to achieve better accuracy against them. However, the model still cannot resist adversarial examples produced by generation methods that were not used in training. To mitigate this issue, we propose a new method for generating adversarial examples that uses a Lagrange multiplier to maximize the loss while minimizing the size of the perturbation. The main reason for the good performance of this method is that it selects the perturbation size for each training sample in proportion to the model's loss on that sample, allowing smaller perturbations for misclassified examples and larger perturbations for the rest. Our investigations show that our method increases the average robustness of the model against adversarial examples on CIFAR-10 and ImageNet-100 by 5.9% and 3.2%, respectively.
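The Lagrangian attack described above can be sketched as follows. This is a minimal illustrative implementation, not the thesis's exact algorithm: the function name, step count, step size, and the quadratic penalty weight `lam` are all assumptions introduced here. The key idea from the abstract is preserved: the attack objective maximizes the classification loss while a Lagrange-style penalty term discourages large perturbations, so low-loss (correctly classified) samples absorb larger perturbations than already-misclassified ones.

```python
import torch
import torch.nn.functional as F

def lagrangian_attack(model, x, y, lam=1.0, steps=10, step_size=0.05):
    """Hypothetical sketch: maximize loss minus a perturbation-size penalty.

    Objective (ascended w.r.t. delta):
        CE(model(x + delta), y) - lam * ||delta||^2
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        # Maximize classification loss, penalize perturbation magnitude.
        obj = F.cross_entropy(logits, y) - lam * delta.pow(2).mean()
        grad, = torch.autograd.grad(obj, delta)
        # Signed-gradient ascent step (an assumption; any ascent rule works).
        delta = (delta + step_size * grad.sign()).detach().requires_grad_(True)
    return (x + delta).detach()

# Toy usage on a linear classifier with random data.
model = torch.nn.Linear(8, 3)
x = torch.randn(4, 8)
y = torch.randint(0, 3, (4,))
x_adv = lagrangian_attack(model, x, y)
```

During adversarial training, `x_adv` would replace (or augment) the clean batch in the usual training loop; because the penalty is tied to the loss through a single objective, the effective perturbation budget adapts per sample rather than being fixed as in standard PGD training.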
- Keywords:
- Deep Neural Networks ; Adversarial Example ; Adversarial Robust Training ; Unseen Attacks
- Contents:
- Introduction
- Preliminary Concepts
- Related Work
- Proposed Methods
- Results
- Conclusion and Suggestions