
Adversarial Robustness of Deep Learning Models in Brain Medical Images with a focus on Alzheimer's disease

Hemmati, Mohammad | 2023

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 56431 (05)
  4. University: Sharif University of Technology
  5. Department: Electrical Engineering
  6. Advisor(s): Bagheri Shouraki, Saeed
  7. Abstract:
  8. In recent years, adversarial attacks have become a serious challenge to the security and stability of neural networks. In medical imaging, these attacks can cause a network to misdiagnose, even though the adversarial samples appear normal to a human observer. Various defense methods exist, but most require full access to the target neural network and knowledge of the attack type; as a result, their performance against unknown attacks drops drastically. In this research, we aim to detect adversarial samples without knowing the architecture of the target neural network or the type of attack, by examining the adversarial noise in the frequency domain and measuring the Euclidean distance of the samples from the manifold of the input dataset. For this purpose, we use an autoencoder network based on the U-Net architecture, which fits the data manifold and reconstructs the input image well. The research focuses on Alzheimer's disease because, unlike other diseases (such as brain tumors or localized reduction of the brain's gray matter), Alzheimer's has no local characteristics that can be recognized explicitly. A minimal sketch of this detection idea appears after the keyword list below.
  9. Keywords:
  10. Deep Learning ; Neural Networks ; Alzheimer ; Adversarial Attacks ; Adversarial Example ; Adversarial Attacks Detection ; Two Dimensional Fourier Transform
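
The following is a minimal sketch, not the thesis code, of the detection idea described in the abstract: an autoencoder (assumed here to be a U-Net-style model already trained to reconstruct clean brain MRI slices) reconstructs the input, the Euclidean distance between input and reconstruction serves as a proxy for distance to the data manifold, and the two-dimensional Fourier transform of the residual gives a high-frequency score. The thresholds and the `autoencoder` argument are hypothetical placeholders.

```python
import torch


def high_frequency_energy(residual: torch.Tensor, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a centred low-frequency square."""
    spec = torch.fft.fftshift(torch.fft.fft2(residual))
    power = spec.abs() ** 2
    h, w = power.shape[-2:]
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = power[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    total = power.sum()
    return float((total - low) / (total + 1e-12))


def detect_adversarial(
    image: torch.Tensor,           # shape (1, 1, H, W), grayscale MRI slice in [0, 1]
    autoencoder: torch.nn.Module,  # assumed pre-trained U-Net-style reconstructor
    dist_threshold: float = 5.0,   # hypothetical, tuned on clean validation data
    hf_threshold: float = 0.6,     # hypothetical
) -> bool:
    autoencoder.eval()
    with torch.no_grad():
        reconstruction = autoencoder(image)
    residual = image - reconstruction
    # Euclidean (L2) reconstruction distance: a large value suggests the input
    # lies far from the manifold the autoencoder was fitted to.
    distance = torch.linalg.vector_norm(residual).item()
    # Adversarial perturbations tend to concentrate energy at high spatial frequencies.
    hf_score = high_frequency_energy(residual[0, 0])
    return distance > dist_threshold or hf_score > hf_threshold
```

In practice, both thresholds would be calibrated on clean (non-adversarial) validation images so that the false-positive rate stays acceptable; an input exceeding either score is flagged as a suspected adversarial example.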
