Published on Wed Aug 04 2021

On the Robustness of Domain Adaption to Adversarial Attacks

Liyuan Zhang, Yuhang Zhou, Lei Zhang

Abstract

State-of-the-art deep neural networks (DNNs) have been shown to achieve excellent performance on unsupervised domain adaptation (UDA). However, recent work shows that DNNs perform poorly when attacked with adversarial samples, which are crafted by simply adding small perturbations to the original images. Although plenty of work has focused on adversarial attacks in general, to the best of our knowledge there is no systematic study of the robustness of unsupervised domain adaptation models. Hence, we discuss the robustness of unsupervised domain adaptation against adversarial attacks for the first time. We benchmark various settings of adversarial attack and defense in domain adaptation, and propose a cross-domain attack method based on pseudo labels. Most importantly, we analyze the impact of different datasets, models, attack methods, and defense methods. Our work directly demonstrates the limited robustness of unsupervised domain adaptation models, and we hope it encourages the community to pay more attention to improving robustness against such attacks.
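The abstract does not spell out the attack formulation, but a cross-domain attack based on pseudo labels can be sketched as follows: the adapted model first assigns pseudo labels to the unlabeled target images, and a gradient-based attack (PGD here) then perturbs those images to contradict their own pseudo labels. This is a minimal illustration, not the authors' exact method; `model`, the epsilon budget, and the step sizes are assumptions.

```python
# Minimal sketch (not the paper's exact formulation): attack unlabeled
# target-domain images using pseudo labels produced by the UDA model itself,
# then run a standard PGD loop against those labels.
import torch
import torch.nn.functional as F

def pseudo_label_attack(model, x_target, eps=8/255, alpha=2/255, steps=10):
    model.eval()
    with torch.no_grad():
        # pseudo labels: the model's own predictions on the clean target images
        pseudo_labels = model(x_target).argmax(dim=1)

    x_adv = x_target.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), pseudo_labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # untargeted step: increase the loss w.r.t. the pseudo labels
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the L-infinity ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x_target - eps), x_target + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```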

Sun May 10 2020
Machine Learning
Class-Aware Domain Adaptation for Improving Adversarial Robustness
Convolutional neural networks are vulnerable to adversarial examples. Adversarial training can overfit to a specific type of attack and lead to a drop in standard accuracy on clean images. We propose a novel Class-Aware Domain Adaptation (CADA) method for defending against attacks.
Fri Feb 05 2021
Machine Learning
Optimal Transport as a Defense Against Adversarial Attacks
Deep learning classifiers are now known to have flaws in the representations of their classes. Adversarial attacks can find a human-imperceptible perturbation that will mislead a trained model. The most effective methods to defend against such attacks train on generated adversarial examples to learn their distribution.
Mon Oct 01 2018
Machine Learning
Improving the Generalization of Adversarial Training with Domain Adaptation
Adversarial training is promising for improving the robustness of deep learning models. Most existing adversarial training approaches are based on a specific type of attack, which makes it difficult to train a model that generalizes well, due to the lack of representative adversarial samples. To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation method.
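For context, standard adversarial training (which this paper extends with a domain adaptation term) augments or replaces clean training batches with adversarial examples generated on the fly. Below is a minimal sketch of one such training step, not the paper's own method; the attack here is a single-step sign-gradient perturbation, and the hyperparameters are assumptions.

```python
# One step of standard adversarial training (not this paper's method):
# craft adversarial examples from the current batch, then train on them.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # generate single-step adversarial examples under the current parameters
    x_pert = x.clone().detach().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_pert), y)
    grad = torch.autograd.grad(attack_loss, x_pert)[0]
    x_adv = (x_pert + eps * grad.sign()).clamp(0.0, 1.0).detach()

    # update the model on the adversarial batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```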
Thu Apr 01 2021
Machine Learning
Domain Invariant Adversarial Learning
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation which is both robust and domain invariant.
Mon Apr 19 2021
Machine Learning
Direction-Aggregated Attack for Transferable Adversarial Examples
Thu Oct 22 2020
Machine Learning
Defense-guided Transferable Adversarial Attacks
Deep neural networks perform challenging tasks excellently, but are susceptible to adversarial examples, which mislead classifiers. We design a max-min framework inspired by input transformations, which are beneficial to both adversarial attack and defense. Experimentally, we show that our ASR
Sat Dec 20 2014
Machine Learning
Explaining and Harnessing Adversarial Examples
Machine learning models consistently misclassify adversarial examples. We argue that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results.
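The fast gradient sign method (FGSM) introduced in this paper perturbs an input by a single step of size eps along the sign of the loss gradient, x_adv = x + eps * sign(grad_x J(theta, x, y)). A minimal PyTorch sketch, where the model and the eps value are placeholders:

```python
# FGSM: one gradient-sign step of size eps away from the true label.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.007):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # x_adv = x + eps * sign(grad_x J(theta, x, y)), kept in the valid pixel range
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```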
Sat Dec 21 2013
Neural Networks
Intriguing properties of neural networks
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions.
Wed Jul 25 2012
Machine Learning
Equivalence of distance-based and RKHS-based statistics in hypothesis testing
We provide a unifying framework linking two classes of statistics used in two-sample and independence testing. The energy distance most commonly employed in statistics is just one member of a parametric family of kernels. We show that other choices from this family can yield more powerful tests.
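As a concrete instance of the RKHS-based statistics discussed here, the squared maximum mean discrepancy (MMD) between two samples can be estimated with a Gaussian kernel; the energy distance corresponds to a different kernel choice from the same family. A small NumPy sketch, where the bandwidth is an arbitrary assumption:

```python
# Biased (V-statistic) estimator of squared MMD with a Gaussian RBF kernel.
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    def gram(A, B):
        # pairwise squared Euclidean distances, then the RBF kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```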
Tue Oct 24 2017
Machine Learning
One pixel attack for fooling deep neural networks
Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified.
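The attack can be framed as a black-box search: differential evolution optimizes a single candidate pixel (row, column, RGB value) so that the model's confidence in the true class drops. The sketch below is illustrative rather than the paper's exact setup; `predict_proba` is an assumed helper returning class probabilities for one HxWx3 image with values in [0, 1], and the evolution hyperparameters are arbitrary.

```python
# One-pixel attack sketch: differential evolution over (row, col, r, g, b).
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_proba, image, true_label):
    h, w, _ = image.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def apply_pixel(z):
        x = image.copy()
        x[int(z[0]), int(z[1])] = z[2:5]   # overwrite a single pixel's RGB values
        return x

    def objective(z):
        # minimize the probability assigned to the true class after the change
        return predict_proba(apply_pixel(z))[true_label]

    result = differential_evolution(objective, bounds, maxiter=30, popsize=20, seed=0)
    return apply_pixel(result.x)
```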
Fri Mar 27 2020
Computer Vision
Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations
The learning of deep networks largely relies on data with human-annotated labels. In some label-insufficient situations, performance degrades near decision boundaries that lie in regions of high data density. To improve both discriminability and diversity, we propose Batch Nuclear-norm Maximization.
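Batch Nuclear-norm Maximization can be written as a single loss term: maximize the nuclear norm of the batch's softmax prediction matrix, which jointly encourages confident (discriminable) and diverse predictions. A minimal PyTorch sketch, assuming `logits` is the batch-size by num-classes output of the network:

```python
# BNM loss sketch: negative nuclear norm of the softmax prediction matrix,
# averaged over the batch, to be minimized alongside the supervised loss.
import torch
import torch.nn.functional as F

def bnm_loss(logits):
    probs = F.softmax(logits, dim=1)          # shape: (batch_size, num_classes)
    return -torch.norm(probs, p='nuc') / probs.shape[0]
```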
Wed Apr 08 2020
Computer Vision
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking