Investigating Effect of Data Features on Robustness of Deep Neural Networks Against Adversarial Attacks

Rastegari Nejad, Mohammad Javad | 2022

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 56432 (19)
  4. University: Sharif University of Technology
  5. Department: Computer Engineering
  6. Advisor(s): Jalili, Rasool
  7. Abstract:
  8. The impressive performance of deep neural networks in various fields has led to the widespread adoption of this technology across organizations and businesses. Training a neural network is costly, both in collecting relevant data and in computation; a trained network is therefore a valuable asset to any organization. Among the most important attacks against these networks are model extraction attacks, which aim to train a surrogate of the target learning model. Beyond violating the intellectual property of the target model, they can serve as the basis for further attacks such as adversarial attacks or membership inference attacks. Based on the attacker's access to the training-data distribution, model extraction attacks fall into two categories: in-distribution attacks and out-of-distribution attacks. An out-of-distribution attacker has no access to the distribution of the target model's training data and must rely on out-of-distribution data to mount the attack. All previous defenses against model extraction attacks assume an out-of-distribution attacker; none of them can defend against an attacker with access to in-distribution data. Moreover, an in-distribution attacker can significantly reduce the cost of the attack by using active learning methods. In this research, focusing on the characteristics of the data, a mechanism for valuing each user's input queries is presented, with the aim of identifying in-distribution attackers and increasing the cost of their model extraction attacks.
  9. Keywords:
  10. Machine Learning Security ; Adversarial Example ; Deep Neural Networks ; Model Extraction Attacks ; Active Learning
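The active-learning extraction attacker described in the abstract can be illustrated with a minimal sketch (not from the thesis; the threshold target model and all names here are hypothetical). Against a one-dimensional threshold classifier, uncertainty sampling degenerates into binary search: the attacker always queries the point where its current surrogate is most uncertain, halving the unknown region with every query. Random in-distribution queries would instead need on the order of 1/epsilon samples to locate the boundary to precision epsilon.

```python
def target(x, threshold=0.73):
    """Hypothetical black-box target model: a 1-D threshold classifier.
    The attacker can only observe the label, not the threshold."""
    return 1 if x >= threshold else 0


def extract_active(target, budget=20):
    """Model extraction via uncertainty sampling.

    The surrogate is the interval [lo, hi] known to contain the
    boundary; the most uncertain point is its midpoint, so each
    query halves the interval (binary search). Returns the
    estimated threshold and the number of queries spent.
    """
    lo, hi = 0.0, 1.0  # target(lo) == 0 and target(hi) == 1 throughout
    queries = 0
    while queries < budget:
        mid = (lo + hi) / 2.0  # point of maximal surrogate uncertainty
        if target(mid) == 1:
            hi = mid
        else:
            lo = mid
        queries += 1
    return (lo + hi) / 2.0, queries


est, n = extract_active(target, budget=20)
print(f"estimated threshold: {est:.6f} after {n} queries")
```

After 20 queries the interval has width 2^-20, so the estimate is within about 10^-6 of the true boundary; a defense that values queries by their informativeness, as the thesis proposes, would flag exactly this kind of boundary-hugging query sequence.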
