Towards improving robustness of deep neural networks to adversarial perturbations
Amini, S. ; Sharif University of Technology | 2020
- Type of Document: Article
- DOI: 10.1109/TMM.2020.2969784
- Publisher: Institute of Electrical and Electronics Engineers Inc., 2020
- Abstract:
- Deep neural networks have achieved superlative performance in many machine-learning-based perception and recognition tasks, even surpassing human accuracy in some applications. However, the human perception system has been found to be far more robust to adversarial perturbations than these artificial networks. It has been shown that a deep architecture with a lower Lipschitz constant generalizes better and tolerates higher levels of adversarial perturbation. Smooth regularization has previously been proposed to control the Lipschitz constant of a deep architecture; in this work, we show how a deep convolutional neural network (CNN) based on non-smooth regularization of the convolutional and fully connected layers can simultaneously achieve enhanced generalization and robustness to adversarial perturbation. We propose two non-smooth regularizers tailored to adversarial samples with different signal-to-noise ratios. The regularizers build direct interconnections among the weight matrices in each layer, through which they control the Lipschitz constant of the architecture and improve the consistency of the network's input-output mapping. This yields a more reliable and interpretable network mapping and reduces abrupt changes in the network's output. We develop an efficient algorithm for solving the resulting non-smooth learning problems that exhibits a gradual complexity-addition property. Simulation results on three benchmark datasets demonstrate that the proposed formulations outperform previously reported methods for improving the robustness of deep architectures, moving them toward human-level robustness to adversarial samples.
- Keywords:
- Convolutional neural network ; Gradient descent ; Interpretable ; Proximal operator ; Regularizer ; Robust ; Convolution ; Convolutional neural networks ; Deep learning ; Mapping ; Multilayer neural networks ; Network architecture ; Robustness (control systems) ; Signal to noise ratio ; Artificial networks ; Benchmark datasets ; Deep architectures ; Human perception systems ; Input-output mapping ; Learning problem ; Lipschitz constant ; Perception and recognition ; Deep neural networks
- Source: IEEE Transactions on Multimedia ; Volume 22, Issue 7, 2020, Pages 1889-1903
- URL: https://ieeexplore.ieee.org/document/8970483
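The abstract's core ingredients (a non-smooth regularizer handled by its proximal operator, and a Lipschitz bound tied to the layer weight matrices) can be illustrated with a minimal NumPy sketch. This is not the paper's actual regularizers or algorithm, only the standard building blocks they are based on: an ISTA-style proximal-gradient step with an L1 penalty, and the usual Lipschitz upper bound for a feed-forward network as the product of per-layer spectral norms.

```python
import numpy as np

def soft_threshold(W, tau):
    """Proximal operator of tau * ||W||_1 (element-wise soft-thresholding)."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def proximal_gradient_step(W, grad, lr, lam):
    """One ISTA-style update: a gradient step on the smooth loss,
    followed by the proximal map of the non-smooth regularizer."""
    return soft_threshold(W - lr * grad, lr * lam)

def lipschitz_upper_bound(weights):
    """Product of spectral norms: an upper bound on the Lipschitz
    constant of a feed-forward network with 1-Lipschitz activations."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Hypothetical two-layer network: shrinking the weights via the proximal
# step also shrinks the spectral-norm product, i.e. the Lipschitz bound.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((3, 4))
before = lipschitz_upper_bound([W1, W2])
W1 = proximal_gradient_step(W1, np.zeros_like(W1), lr=0.1, lam=1.0)
after = lipschitz_upper_bound([W1, W2])
print(before > after)  # the non-smooth penalty reduced the bound
```

The non-smooth (L1) penalty is what makes a proximal operator necessary: the objective is not differentiable at zero, so plain gradient descent cannot produce exactly sparse weights, while the soft-thresholding step does.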