Published on Sun Jun 21 2020

Graph Backdoor

Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang

Deep neural networks (DNNs) are inherently vulnerable to backdoor attacks. GTA is the first backdoor attack on GNNs: it defines triggers as specific subgraphs, entailing a large design spectrum for the adversary, and it can be readily launched without knowledge of the downstream models.

Abstract

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise. Despite the plethora of prior work on DNNs for continuous data (e.g., images), the vulnerability of graph neural networks (GNNs) for discrete-structured data (e.g., graphs) is largely unexplored, which is highly concerning given their increasing use in security-sensitive domains. To bridge this gap, we present GTA, the first backdoor attack on GNNs. Compared with prior work, GTA departs in significant ways: graph-oriented -- it defines triggers as specific subgraphs, including both topological structures and descriptive features, entailing a large design spectrum for the adversary; input-tailored -- it dynamically adapts triggers to individual graphs, thereby optimizing both attack effectiveness and evasiveness; downstream model-agnostic -- it can be readily launched without knowledge regarding downstream models or fine-tuning strategies; and attack-extensible -- it can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks, constituting severe threats for a range of security-critical applications. Through extensive evaluation using benchmark datasets and state-of-the-art models, we demonstrate the effectiveness of GTA. We further provide analytical justification for its effectiveness and discuss potential countermeasures, pointing to several promising research directions.
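To make the "triggers as specific subgraphs" idea concrete, below is a minimal, illustrative sketch of planting a subgraph trigger (both topology and node features) into an input graph represented by an adjacency matrix and a feature matrix. This is not the authors' GTA implementation, which generates triggers adaptively per input via a trigger-generation network; all names here (embed_trigger, trigger_adj, trigger_feat, attach_nodes) are hypothetical.

import numpy as np

def embed_trigger(adj, feat, trigger_adj, trigger_feat, attach_nodes):
    """Replace the induced subgraph on `attach_nodes` with the trigger subgraph."""
    adj, feat = adj.copy(), feat.copy()
    k = len(attach_nodes)
    assert trigger_adj.shape == (k, k) and trigger_feat.shape[0] == k
    for a, u in enumerate(attach_nodes):
        for b, v in enumerate(attach_nodes):
            # Overwrite the topology among the chosen nodes with the trigger topology.
            adj[u, v] = trigger_adj[a, b]
        # Overwrite the descriptive features of the chosen nodes.
        feat[u] = trigger_feat[a]
    return adj, feat

# Toy usage: plant a 3-node triangle trigger into a random 6-node graph.
n, d, k = 6, 4, 3
rng = np.random.default_rng(0)
adj = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(adj, 0)
adj = np.maximum(adj, adj.T)                 # symmetrize (undirected graph)
feat = rng.random((n, d))
triangle = np.ones((k, k)) - np.eye(k)       # trigger topology: a triangle
poisoned_adj, poisoned_feat = embed_trigger(adj, feat, triangle, rng.random((k, d)), [0, 2, 4])

In a poisoning setting, such trigger-embedded graphs would be labeled with the attacker-chosen target class during (pre-)training, so that the trojan model misclassifies trigger-embedded inputs while behaving normally otherwise.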

Mon Mar 02 2020
Machine Learning
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
Deep neural networks (DNNs) can be easily fooled by small perturbations of the input. An adversary can mislead GNNs into giving wrong predictions by modifying the graph structure.
Mon Jan 18 2021
Machine Learning
GraphAttacker: A General Multi-Task GraphAttack Framework
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications, but they have been shown to have potential security issues posed by adversarial samples. We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust its structures and attack strategies.
Wed Feb 12 2020
Machine Learning
Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models
Deep neural networks, while generalizing well, are known to be sensitive to small adversarial perturbations. "Bad-actor" nodes found for one graph model severely compromise other models as well.
Sat Oct 24 2020
Machine Learning
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Machine learning models face a severe threat: model extraction attacks. Given only black-box access to a target GNN model, the attacker aims to reconstruct a duplicate of it via several nodes he has obtained. We first systematically formalize the threat model.
Fri Feb 22 2019
Machine Learning
Adversarial Attacks on Graph Neural Networks via Meta Learning
Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. Our attacks do not assume any knowledge about or access to the target classifiers.
Mon May 21 2018
Machine Learning
Adversarial Attacks on Neural Networks for Graph Data
Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Can deep learning models for graphs be easily fooled?
Mon Oct 30 2017
Machine Learning
Graph Attention Networks
Graph attention networks (GATs) are novel neural network architectures that operate on graph-structured data. GATs leverage masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions.
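For reference, a single GAT attention head computes coefficients over each node's neighborhood $\mathcal{N}_i$ with a shared weight matrix $\mathbf{W}$ and attention vector $\mathbf{a}$:

$\alpha_{ij} = \mathrm{softmax}_j\!\left(\mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\left[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j\right]\right)\right), \qquad \mathbf{h}_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}\mathbf{h}_j\Big)$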
Sat Dec 20 2014
Machine Learning
Explaining and Harnessing Adversarial Examples
Machine learning models consistently misclassify adversarial examples. We argue that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results.
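The paper's fast gradient sign method (FGSM) illustrates the linearity argument: an adversarial example is obtained by perturbing the input along the sign of the loss gradient,

$x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\!\left(\nabla_x J(\theta, x, y)\right)$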
Sat Dec 21 2013
Neural Networks
Intriguing properties of neural networks
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions.
Thu Feb 01 2018
Machine Learning
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial attacks. We describe characteristic behaviors of defenses displaying the effect, and for each of the three types we discover, we develop attack techniques to overcome it.
Fri Sep 09 2016
Machine Learning
Semi-Supervised Classification with Graph Convolutional Networks
We present a scalable approach for semi-supervised learning on graph-structured data. The approach is based on an efficient variant of convolutional neural networks. We motivate the choice of our architecture via a localized first-order approximation of graph convolutions.
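For reference, the resulting layer-wise propagation rule of the GCN is

$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right), \qquad \tilde{A} = A + I_N, \quad \tilde{D}_{ii} = \textstyle\sum_j \tilde{A}_{ij}$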
Mon Jun 19 2017
Machine Learning
Towards Deep Learning Models Resistant to Adversarial Attacks
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples. The existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the robustness of neural networks through the lens of robust optimization.
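Concretely, the paper frames adversarial robustness as a saddle-point (min-max) problem over a perturbation set $\mathcal{S}$:

$\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}\left[\, \max_{\delta \in \mathcal{S}} L(\theta, x + \delta, y) \right]$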
Mon Jun 14 2021
Machine Learning
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine-learning model to output an attacker-chosen class. The success of backdoor attacks depends on the complexity of the learning algorithm and the fraction of backdoor samples injected into the training set.
Thu Nov 28 2019
Machine Learning
Towards Security Threats of Deep Learning Systems: A Survey
Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses that can threaten the security of learning models. We investigate attacks on deep learning systems and analyze them to draw findings from multiple perspectives.
Sun Aug 01 2021
Computer Vision
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Self-supervised learning in computer vision aims to pre-train an image encoder using a large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can then be used as a feature extractor to build downstream classifiers for many downstream tasks. In this work, we propose BadEncoder, the first backdoor attack to self-supervised learning.
Thu Apr 08 2021
Machine Learning
Explainability-based Backdoor Attacks Against Graph Neural Networks
Tue Jul 21 2020
Machine Learning
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
This work provides the community with a timely and comprehensive review of backdoor attacks and countermeasures on deep learning. According to the attacker's capability and the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide. The countermeasures are categorized into four general classes.
Fri Jul 17 2020
Machine Learning
Backdoor Learning: A Survey