Published on Thu Jul 02 2020

Trace-Norm Adversarial Examples

Ehsan Kazemi, Thomas Kerdreux, Liqiang Wang

White box adversarial perturbations are sought via iterative optimization. Constraining the adversarial search with different norms results in disparately structured adversarial examples. Although such structures are pervasive in optimization, they pose a challenge, for instance, for theoretical certification.

Abstract

White box adversarial perturbations are sought via iterative optimization algorithms, most often by minimizing an adversarial loss over a neighborhood of the original image, the so-called distortion set. Constraining the adversarial search with different norms results in disparately structured adversarial examples. Here we explore several distortion sets with structure-enhancing algorithms. Although such structures are pervasive in optimization, they pose a challenge, for instance, for theoretical adversarial certification, which in any case provides only certificates. Because adversarial robustness is still an empirical field, defense mechanisms should also reasonably be evaluated against differently structured attacks. Moreover, these structured adversarial perturbations may allow for larger distortion sizes than their unstructured counterparts while remaining imperceptible, or perceptible only as natural slight distortions of the image. Finally, they offer some control over how the adversarial perturbation is generated, for example (localized) blurriness.
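For a trace-norm (nuclear-norm) distortion set, the linear subproblem of a Frank-Wolfe-style search only needs the top singular pair of the loss gradient, which is what encourages low-rank, structured perturbations. The snippet below is a minimal sketch of that idea for a single 2D image, not the authors' implementation; the gradient function `adv_loss_grad`, the radius `epsilon`, and the step count are placeholder assumptions.

```python
# Minimal sketch of one possible Frank-Wolfe attack over a nuclear-norm ball.
# Assumes a 2D image (e.g., grayscale or a single channel) and a user-supplied
# gradient of the adversarial loss; both are illustrative placeholders.
import numpy as np

def nuclear_lmo(grad, epsilon):
    """Linear minimization oracle over {s : ||s||_* <= epsilon}.

    The minimizer of <grad, s> on the nuclear-norm ball is the rank-one matrix
    -epsilon * u1 v1^T, where (u1, v1) are the top singular vectors of grad.
    """
    u, _, vt = np.linalg.svd(grad, full_matrices=False)
    return -epsilon * np.outer(u[:, 0], vt[0, :])

def frank_wolfe_attack(x, adv_loss_grad, epsilon, steps=20):
    """Builds a perturbation delta with ||delta||_* <= epsilon by FW iterations."""
    delta = np.zeros_like(x)
    for t in range(steps):
        g = adv_loss_grad(x + delta)     # gradient of the adversarial loss
        s = nuclear_lmo(g, epsilon)      # extreme point of the distortion set
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        delta = (1 - gamma) * delta + gamma * s
    return x + delta
```

Because every Frank-Wolfe iterate is a convex combination of rank-one extreme points, the resulting perturbation stays low-rank by construction, which is one way to obtain the "structured" adversarial examples discussed above.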

Mon Feb 15 2021
Artificial Intelligence
Generating Structured Adversarial Attacks Using Frank-Wolfe Method
White box adversarial perturbations are generated via iterative optimization algorithms. Constraining the adversarial search with different norms results in disparately structured adversarial examples. These new structures for adversarial examples may challenge both provable and empirical robustness mechanisms.
Tue Jan 02 2018
Machine Learning
High Dimensional Spaces, Deep Learning and Adversarial Examples
In this paper, we analyze deep learning from a mathematical point of view. The results are based on intriguing mathematical properties of high dimensional spaces. We show how the multiresolution nature of natural images explains perturbation-based adversarial examples, in the form of a stronger result.
Mon Aug 05 2019
Machine Learning
A principled approach for generating adversarial images under non-smooth dissimilarity metrics
Deep neural networks perform well on real-world data, but small changes in the input can easily lead to misclassification. In this work, we propose an attack methodology not only for cases where the perturbations are measured by norms, but in fact for any adversarial…
Sun Feb 14 2021
Machine Learning
Perceptually Constrained Adversarial Attacks
The structural similarity index (SSIM) was originally developed to measure the perceptual quality of images. SSIM-constrained adversarial attacks can break state-of-the-art classifiers and achieve a similar or higher success rate than the elastic net attack.
Thu Feb 21 2019
Machine Learning
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
A growing area of work has studied the existence of adversarial examples, datapoints which have been perturbed to fool a classifier. The majority of these works have focused on threat models defined by norm-bounded perturbations. In this paper, we propose a new threat model for adversarial…
Thu Feb 28 2019
Machine Learning
On the Effectiveness of Low Frequency Perturbations
Adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs. We empirically show that performance improvements in both the white-box and black-box transfer settings are yielded only when low frequency components are preserved.