Effect of Generated Data on the Robustness of Adversarial Distillation Methods

Kashani, Paria | 2022

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 56667 (19)
  4. University: Sharif University of Technology
  5. Department: Computer Engineering
  6. Advisor(s): Jafari Siavoshani, Mahdi
  7. Abstract:
  8. Nowadays, neural networks are the primary method in most machine learning applications. However, research has shown that these models are vulnerable to adversarial attacks: imperceptible changes to a network's input that deceive it into making incorrect predictions. This issue becomes especially critical in sensitive and security-related applications of neural networks, such as self-driving cars and medical diagnosis systems. In recent years, much research has been devoted to making neural networks robust against this threat, but most of it achieves higher robustness through larger and more complex models. Little attention has been paid to providing adversarial robustness in small networks suitable for devices with limited memory and processing power. Another challenge in this field is the drop in accuracy on natural images in adversarially trained networks. This research investigates solutions for building adversarial robustness into small networks suitable for resource-constrained devices. A method is presented that combines recent advances in increasing natural accuracy and in reducing the size of robust models, and trains a neural network that is relatively small, fast, and higher in both natural and adversarial accuracy than existing examples. The method uses a teacher model that has reached the highest robust accuracy to date by training on synthetic data. Using knowledge distillation methods, which have recently proven effective at increasing robustness when combined with adversarial training and methods derived from it, the knowledge stored in the teacher network is transferred to student networks, and the impact of the additional data added to the training process on the performance of these methods is measured.
In one of these experiments, a student network with a ResNet-18 architecture on the CIFAR-10 dataset reaches 58.3% robust accuracy and 85.03% natural accuracy, which are about 1% and 2% higher, respectively, than the robust and natural accuracy of the same architecture trained directly with adversarial training.
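The abstract describes transferring the knowledge of a robust teacher to a small student by combining knowledge distillation with adversarial training. As an illustration only, and not the thesis's exact objective, the sketch below shows a generic adversarial-distillation-style loss in NumPy: a weighted sum of (i) the KL divergence between the teacher's temperature-softened outputs on clean inputs and the student's outputs on adversarial inputs, and (ii) the student's cross-entropy on the true labels. The function names and the `alpha` and `T` values are assumptions for the sketch.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stabilized).
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), summed over classes and averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def adversarial_distillation_loss(student_logits_adv, teacher_logits_clean,
                                  labels, alpha=0.9, T=4.0):
    """Generic adversarial-distillation objective (a sketch, not the
    thesis's exact loss): alpha * T^2 * KL(teacher soft targets ||
    student on adversarial inputs) + (1 - alpha) * CE(student, labels)."""
    soft_teacher = softmax(teacher_logits_clean, T)
    soft_student = softmax(student_logits_adv, T)
    distill = kl_div(soft_teacher, soft_student)
    # Hard-label cross-entropy of the student on its adversarial inputs.
    probs = softmax(student_logits_adv)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return alpha * (T ** 2) * distill + (1 - alpha) * ce
```

The `T ** 2` factor is the standard rescaling from Hinton-style distillation, which keeps the gradient magnitudes of the soft and hard terms comparable as the temperature changes.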
  9. Keywords:
  10. Deep Neural Networks ; Compression ; Data Generation ; Adversarial Robust Training ; Distillation

 Bookmark

  • Introduction
    • Problem Definition
      • Definitions
    • Defense Methods
    • Importance and Applications
    • Challenges
    • Research Objectives
    • Thesis Structure
  • Related Work
    • Notation
    • A Review of Adversarial Attacks
      • First-Generation Attacks
      • AutoAttack
      • Adaptive Attacks
    • A Review of Defense Methods
      • Adversarial Training
      • Data-Augmentation-Based Defenses
      • Knowledge-Distillation-Based Defenses
    • Robust Overfitting
    • Lightweight Deep Models
    • Summary
  • Proposed Approach
    • Preliminary Methods
    • Summary of the Main Method
    • Training the Teacher Model
      • The "Improving Robustness Using Generated Data" Paper
      • Running the Training
    • Transferring Knowledge to the Student Model
    • Summary
  • Experiments
    • Datasets
    • Experimental Settings
      • Implementation Details
      • Learning-Rate Scheduling
      • Attack Used for Training
    • Evaluation Metric
    • Results
      • Observing Robust Overfitting in the Experiments
    • Summary
  • Conclusion and Future Work
    • Conclusion
    • Future Work