
Defending Traffic Unobservability through Thwarting Statistical Features

Karimi, Mohammad Reza | 2019

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 53338 (19)
  4. University: Sharif University of Technology
  5. Department: Computer Engineering
  6. Advisor(s): Jalili, Rasool
  7. Abstract:
     Governments and organizations need to classify network traffic by protocol, application, and user behavior, using deep packet inspection systems, in order to monitor, control, and enforce law and governance over the online behavior of their citizens and personnel. The high capability of machine learning in classification problems has led traffic monitoring systems to adopt it, and research on machine learning-based traffic monitoring has reached relative maturity, arriving at the threshold of industrial, commercial, and governmental use. In the latest traffic classification studies using neural networks, the most effective of the machine learning methods, classification accuracy has exceeded 98 percent. This high classification accuracy of machine learning algorithms, and of neural networks in particular, is vulnerable to adversarial examples. An adversarial example is a data instance crafted to make a neural network misclassify it; the failure of the network on generated adversarial examples constitutes an attack on the network. This study reviews various methods of protecting traffic against monitoring and classification, each with a different design attempting to eliminate the signature that characterizes the traffic's network behavior. The novelty of this study is that, instead of devising another scheme to evade traffic classification, it exploits the vulnerability of machine learning classifiers through adversarial example generation algorithms. Three convolutional neural network classifiers were trained on the ISCX network traffic dataset, one for each of three different feature vectors of traffic characteristics. Adversarial examples were generated against those three classifiers using five adversarial example generation algorithms: FGSM, Carlini-Wagner L2, JSMA, DeepFool, and universal perturbation (a minimal FGSM sketch appears after the keyword list below). Each attack was evaluated and compared with the others by accuracy reduction, true positive reduction, and overhead. For example, in a JSMA attack on one of the classifiers, accuracy and true positive rate dropped from 80 percent to 8 percent and 39 percent respectively, at the cost of only a 47 percent increase in overhead. This study shows that, despite their high performance and accuracy, deep learning traffic classification methods are inefficient and fragile in the face of adversarial example attacks.
  8. Keywords: Website Fingerprinting ; Deep Learning ; Traffic Classification ; Adversarial Machine Learning ; Adversarial Example
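The Fast Gradient Sign Method (FGSM) named in the abstract perturbs an input in the direction of the sign of the loss gradient: x_adv = x + epsilon * sign(grad_x L(f(x), y)). Below is a minimal PyTorch sketch of that idea against a traffic classifier; the TrafficCNN architecture, feature count, class count, and epsilon here are illustrative assumptions, not the actual classifiers or features used in the thesis.

import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    """Hypothetical 1-D CNN over a normalized flow feature vector
    (e.g., packet sizes); a stand-in for the thesis's classifiers."""
    def __init__(self, n_features=256, n_classes=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_features // 2), n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_features)
        return self.net(x)

def fgsm(model, x, y, epsilon=0.03):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep features in valid range

# Usage with dummy data: perturb a batch of flow feature vectors and
# measure how often the prediction changes.
model = TrafficCNN()
x = torch.rand(8, 1, 256)            # dummy normalized features
y = torch.randint(0, 12, (8,))       # dummy true labels
x_adv = fgsm(model, x, y)
print((model(x).argmax(1) != model(x_adv).argmax(1)).float().mean())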
