Adversarial Robustness of Deep Neural Networks in Text Domain, M.Sc. Thesis Sharif University of Technology ; Soleymani Baghshah, Mahdieh (Supervisor)
Abstract
In recent years, neural networks have been widely used across most machine learning domains. However, it has been shown that these networks are vulnerable to adversarial examples: small, imperceptible perturbations applied to the input that cause the network to produce wrong outputs and thus fool it. This is an important issue in security-related applications of deep neural networks, such as self-driving cars and medical diagnostics, since in the worst-case scenario even human lives could be threatened. Although many works have focused on crafting adversarial examples for image data, only a few studies have been done on textual data due to the existing...