Published on Sun Nov 29 2020

Architectural Adversarial Robustness: The Case for Deep Pursuit

George Cazenavette, Calvin Murdock, Simon Lucey

Abstract

Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise. While the underlying cause of this sensitivity is not well understood, theoretical analyses can be simplified by reframing each layer of a feed-forward network as an approximate solution to a sparse coding problem. Iterative solutions using basis pursuit are theoretically more stable and have improved adversarial robustness. However, cascading layer-wise pursuit implementations suffer from error accumulation in deeper networks. In contrast, our new method of deep pursuit approximates the activations of all layers as a single global optimization problem, allowing us to consider deeper, real-world architectures with skip connections such as residual networks. Experimentally, our approach demonstrates improved robustness to adversarial noise.
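The reframing described above rests on a known connection between feed-forward layers and sparse coding: a ReLU layer can be read as the first iteration of ISTA (iterative shrinkage-thresholding) applied to a nonnegative sparse coding problem, and running more iterations yields the basis-pursuit-style refinement the abstract refers to. The sketch below illustrates this single-layer view only; the dictionary D, step size, and penalty weight are illustrative assumptions, and the paper's deep pursuit method instead solves the codes of all layers jointly.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def ista_sparse_code(x, D, lam, step, n_iters):
    """ISTA for the nonnegative sparse coding problem
         min_z 0.5 * ||x - D z||^2 + lam * sum(z),  z >= 0.
    One iteration from z = 0 reduces to relu(step * D.T x - step * lam),
    i.e. a ReLU layer with weights step * D.T and bias -step * lam."""
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - x)           # gradient of the data-fit term
        z = relu(z - step * (grad + lam))  # proximal step = shifted ReLU
    return z

# Illustrative random dictionary and signal (not from the paper).
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32)) / np.sqrt(16)
x = rng.standard_normal(16)

one_step = ista_sparse_code(x, D, lam=0.1, step=0.1, n_iters=1)   # ~ a ReLU layer
refined  = ista_sparse_code(x, D, lam=0.1, step=0.1, n_iters=50)  # pursuit refinement
```

With a small enough step size, each ISTA iteration decreases the sparse coding objective, so `refined` is at least as good a code as the single ReLU-like step; chaining such per-layer solvers is the layer-wise pursuit whose error accumulation deep pursuit is designed to avoid.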