
Adversarial Attacks on Deep Neural Networks

Naderi, Hanieh | 2022

  1. Type of Document: Ph.D. Dissertation
  2. Language: Farsi
  3. Document No: 55660 (19)
  4. University: Sharif University of Technology
  5. Department: Computer Engineering
  6. Advisor(s): Kasaei, Shohreh
  7. Abstract:
  8. The remarkable progress of deep neural networks in recent years has led to their adoption in industry and their use in real-world applications. However, one of the most fundamental issues threatening the security of these networks is adversarial attacks: deliberate manipulations of the input data that cause the network to misclassify it. Because attacks can perturb input data in a wide range of ways, identifying their types is a vital part of building a robust network. The inability of deep networks to generalize to unseen data is another important limitation. In this regard, this thesis presents a 2D adversarial attack and a 3D defense.
     In 2D attacks, the type of perturbation is an important consideration. To preserve the appearance of the adversarial image, most attacks make only very limited changes, keeping the adversarial image at a minimal distance from the original. This restriction overlooks a group of natural image changes to which networks are vulnerable and which real-world inputs are highly likely to undergo. In the first phase of the thesis, an unrestricted attack based on geometric transformations such as translation, scaling, and rotation is designed, which can create realistic images that deceive networks. Because only a small number of parameters need to be trained, the attack also reduces training cost. Extensive experimental results show that the attack success rate on well-known networks such as ResNet, LeNet, VGG, and Inception-v3 is about 73% on average.
     In the second phase, a defense against 3D attacks is proposed. By reducing the over-sensitivity of deep neural networks to slight changes in the 3D point cloud, this defense makes them robust against various kinds of attacks. Features carrying unnecessary information are removed from the training point clouds, and the resulting data is used to train the network. The proposed defense is evaluated on the well-known networks PointNet, PointNet++, and DGCNN. On average, it achieves better accuracy against six different attacks than other defense methods, including improvements of 3.8% and 4.26% against the Drop100 and Drop200 attacks, respectively. (Illustrative sketches of both phases follow the keyword list below.)
  9. Keywords:
  10. Deep Neural Networks; Adversarial Attacks; Defense; Adversarial Training
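
The following is a minimal, hypothetical sketch of the first idea: instead of perturbing pixels, only a handful of geometric parameters (rotation angle, scale, translation) are optimized so that a transformed, still natural-looking image is misclassified. The model choice (a torchvision ResNet-18), the optimizer, the step count, and the function name attack are illustrative assumptions, not taken from the dissertation.

    # Hypothetical sketch of an unrestricted geometric-transformation attack:
    # only rotation, scale, and translation parameters are trained to fool a
    # fixed classifier. Model and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights="IMAGENET1K_V1").eval()
    for p in model.parameters():
        p.requires_grad_(False)

    def attack(image, true_label, steps=100, lr=0.05):
        """image: (1, 3, H, W) tensor; true_label: (1,) long tensor."""
        # Trainable transformation parameters: angle, log-scale, translation.
        theta = torch.zeros(1, requires_grad=True)   # rotation angle (radians)
        log_s = torch.zeros(1, requires_grad=True)   # log of isotropic scale
        t = torch.zeros(2, requires_grad=True)       # (tx, ty) translation
        opt = torch.optim.Adam([theta, log_s, t], lr=lr)

        for _ in range(steps):
            s = torch.exp(log_s)
            # 2x3 affine matrix combining rotation, scaling, and translation.
            affine = torch.stack([
                torch.cat([s * torch.cos(theta), -s * torch.sin(theta), t[0:1]]),
                torch.cat([s * torch.sin(theta),  s * torch.cos(theta), t[1:2]]),
            ]).unsqueeze(0)
            grid = F.affine_grid(affine, image.shape, align_corners=False)
            adv = F.grid_sample(image, grid, align_corners=False)

            # Maximize the loss on the true label (untargeted attack).
            loss = -F.cross_entropy(model(adv), true_label)
            opt.zero_grad()
            loss.backward()
            opt.step()

        return adv.detach()

A similarly hedged sketch of the second idea: pruning points whose neighbourhood statistics suggest they carry unnecessary or outlier information before a point cloud is used for training. The k-nearest-neighbour criterion, the threshold parameter alpha, and the function name prune_point_cloud are assumptions for illustration only; the dissertation's actual feature-removal criterion may differ.

    # Hypothetical point-cloud pruning as a defense preprocessing step:
    # points whose mean distance to their k nearest neighbours is unusually
    # large are treated as carrying unnecessary information and are dropped
    # before the cloud is used to train PointNet/PointNet++/DGCNN.
    import torch

    def prune_point_cloud(points, k=8, alpha=1.1):
        """points: (N, 3) tensor; returns the filtered (M, 3) cloud, M <= N."""
        dists = torch.cdist(points, points)              # (N, N) pairwise distances
        # Mean distance to the k nearest neighbours (excluding the point itself).
        knn_d, _ = dists.topk(k + 1, dim=1, largest=False)
        mean_knn = knn_d[:, 1:].mean(dim=1)              # (N,)
        # Keep points whose neighbourhood distance is below a statistical threshold.
        threshold = mean_knn.mean() + alpha * mean_knn.std()
        return points[mean_knn < threshold]

    # Usage: filter each training cloud before feeding it to the network.
    cloud = torch.randn(1024, 3)
    clean = prune_point_cloud(cloud)

In both sketches the classifier itself is left unchanged; only the input data (the image's geometric pose, or the training point cloud's contents) is manipulated, which mirrors the abstract's emphasis on input-level attack and defense rather than architectural changes.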
