
Security Evaluation of Deep Neural Networks in the Presence of an Adversary

Kargar Novin, Omid | 2021

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 54102 (19)
  4. University: Sharif University of Technology
  5. Department: Computer Engineering
  6. Advisor(s): Jalili, Rasool
  7. Abstract:
  8. There has been a noticeable surge in the use of machine learning techniques in various fields, including security-related ones. With this growing pace of adoption, securing these models against attackers has become one of the main topics in the machine learning literature. Recent work has shown that machine learning models are vulnerable in adversarial environments: attackers can craft inputs that fool the models. With the advent of deep neural networks, many researchers have applied them to malware detection and achieved impressive results. Finding the vulnerabilities of these models is therefore an urgent task; models must be made robust against attacks before they can be safely deployed, especially in security-related fields such as malware detection, where failing to do so can have serious negative consequences. In this research, we first present a black-box, source-code-based adversarial malware generation framework that can be used to evaluate malware detection models against real-world adversaries. We then show that by combining the representational power of graph convolutional networks with the function call graph, API calls, and non-negative weights, we can build a robust malware detection model that resists both previous attacks and the source-code-based adversarial generation framework presented in this research (a minimal sketch of the non-negative graph convolution idea appears after the keywords below).
  9. Keywords:
  10. Deep Learning ; Graph Neural Network ; Malware Detection ; Adversarial Example ; Security Assessment
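The following is a minimal illustrative sketch, not the thesis implementation, of the approach described in the abstract: a graph convolutional network over a function call graph whose node features encode API calls, with the layer weights constrained to be non-negative. It assumes PyTorch; all class names, feature shapes, and the clamp-based non-negativity constraint are illustrative assumptions.

    # Sketch only: non-negative GCN over a function call graph (assumed PyTorch API).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NonNegativeGCNLayer(nn.Module):
        """One GCN layer whose weight matrix is kept non-negative."""

        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.weight = nn.Parameter(torch.rand(in_dim, out_dim))

        def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
            # Clamp weights so every learned coefficient stays >= 0; adding more
            # API-call evidence to a function node can then only raise the score,
            # which is the intuition behind non-negative malware detectors.
            w = self.weight.clamp(min=0.0)
            # Standard GCN propagation: aggregate neighbour features, then project.
            return F.relu(adj @ x @ w)

    class MalwareGCN(nn.Module):
        """Tiny two-layer GCN over a function call graph with API-call node features."""

        def __init__(self, num_api_features: int, hidden: int = 64):
            super().__init__()
            self.gc1 = NonNegativeGCNLayer(num_api_features, hidden)
            self.gc2 = NonNegativeGCNLayer(hidden, hidden)
            self.readout = nn.Linear(hidden, 1)

        def forward(self, adj: torch.Tensor, api_features: torch.Tensor) -> torch.Tensor:
            h = self.gc1(adj, api_features)
            h = self.gc2(adj, h)
            # Mean-pool node embeddings into one graph embedding, then score it.
            return torch.sigmoid(self.readout(h.mean(dim=0)))

    if __name__ == "__main__":
        # Toy example: 4 functions, 10 possible API calls per function.
        adj = torch.eye(4) + torch.tensor([[0, 1, 0, 0],
                                           [0, 0, 1, 1],
                                           [0, 0, 0, 0],
                                           [0, 0, 0, 0]], dtype=torch.float)
        api_features = torch.randint(0, 2, (4, 10)).float()
        model = MalwareGCN(num_api_features=10)
        print(model(adj, api_features))  # probability that the sample is malicious

The choice of clamping weights to zero is only one way to enforce non-negativity; the thesis may use a different constraint or graph normalization, which this sketch does not attempt to reproduce.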
