Published on Tue May 28 2019

Brain-inspired reverse adversarial examples

Shaokai Ye, Sia Huat Tan, Kaidi Xu, Yanzhi Wang, Chenglong Bao, Kaisheng Ma

Abstract

A human does not have to see all elephants to recognize an animal as an elephant. In contrast, current state-of-the-art deep learning approaches heavily depend on the variety of training samples and the capacity of the network. In practice, the size of a network is always limited and it is impossible to access all the data samples. Under these circumstances, deep learning models are extremely fragile to human-imperceptible adversarial examples, which pose threats to all safety-critical systems. Inspired by the association and attention mechanisms of the human brain, we propose a reverse adversarial examples method that can greatly improve models' robustness on unseen data. Experiments show that our reverse adversarial method improves accuracy by 19.02% on average for ResNet18, MobileNet, and VGG16 on unseen data transformations. The proposed method is also applicable to compressed models and shows potential to compensate for the robustness drop caused by model quantization, yielding an absolute 30.78% accuracy improvement.
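The abstract does not spell out how a reverse adversarial example is constructed, but the name suggests inverting the usual attack direction. Below is a minimal PyTorch sketch of that reading: a single FGSM-style step whose sign is flipped, so an input is nudged toward the model's own high-confidence region at test time instead of away from it. The function name reverse_fgsm_step, the use of the model's predicted label as a pseudo-label, and the step size epsilon are illustrative assumptions, not the authors' implementation.

# Sketch only: inverts the FGSM direction as one plausible reading of
# "reverse adversarial examples"; not the paper's actual procedure.
import torch
import torch.nn.functional as F

def reverse_fgsm_step(model, x, epsilon=2.0 / 255):
    """Nudge x to *decrease* the loss w.r.t. the model's current prediction."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pseudo_label = logits.argmax(dim=1)   # model's own guess; no ground truth needed
    loss = F.cross_entropy(logits, pseudo_label)
    loss.backward()
    # FGSM adds +epsilon * sign(grad) to fool the model; here we subtract it.
    x_refined = (x - epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_refined.detach()

# Usage: logits_refined = model(reverse_fgsm_step(model, images))

A refinement like this needs no labels at inference time, which is consistent with the abstract's claim that robustness improves on unseen data transformations and on quantized models.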

Sat Nov 28 2020
Machine Learning
Generalized Adversarial Examples: Attacks and Defenses
Some works identify other interesting forms of adversarial examples, such as inputs that are unrecognizable to humans yet classified by DNNs as a particular class with high confidence, as well as adversarial patches. Based on this phenomenon, and from the perspective of human and machine cognition, we propose a new definition of generalized adversarial examples.
Sat Mar 13 2021
Machine Learning
Generating Unrestricted Adversarial Examples via Three Parameters
Mon Feb 05 2018
Artificial Intelligence
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples
Deep learning algorithms and networks are vulnerable to perturbed inputs. Many defense methodologies have been investigated to defend against such adversarial attacks. In this work, we propose a novel methodology to defend against existing powerful attack models.
Wed Feb 22 2017
Artificial Intelligence
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
DeepCloak limits the capacity an attacker can use to generate adversarial samples. It can increase the performance of state-of-the-art DNN models against such inputs.
Mon Aug 05 2019
Machine Learning
Automated Detection System for Adversarial Examples with High-Frequency Noises Sieve
Deep neural networks are vulnerable to well-designed input samples. In particular, neural networks tend to misclassify adversarial examples that are imperceptible to humans. Our proposed system can mostly distinguish adversarial samples from benign images without human intervention.
Fri Apr 10 2020
Machine Learning
Luring of transferable adversarial perturbations in the black-box paradigm
A new approach to improving the robustness of a model against black-box transfer attacks. A removable additional neural network is included in the target model and is designed to trick the adversary into choosing false directions to fool the model.