Published on Thu Apr 08 2021

FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems

Liang Tong, Zhengzhang Chen, Jingchao Ni, Wei Cheng, Dongjin Song, Haifeng Chen, Yevgeniy Vorobeychik
Abstract

We present FACESEC, a framework for fine-grained robustness evaluation of face recognition systems. FACESEC evaluation is performed along four dimensions of adversarial modeling: the nature of the perturbation (e.g., pixel-level noise or face accessories), the attacker's system knowledge (about training data and learning architecture), the attacker's goals (dodging or impersonation), and the attacker's capability (perturbations tailored to individual inputs or shared across sets of inputs). We use FACESEC to study five face recognition systems in both closed-set and open-set settings, and to evaluate the state-of-the-art approach for defending against physically realizable attacks on these systems. We find that accurate knowledge of the neural architecture is significantly more important than knowledge of the training data in black-box attacks. Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks. The efficacy of attacks for other threat-model variations, however, appears highly dependent on both the nature of the perturbation and the neural network architecture. For example, attacks that involve adversarial face masks are usually more potent, even against adversarially trained models, and the ArcFace architecture tends to be more robust than the others.
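The four dimensions above can be read as a small configuration space of threat models. The following Python sketch illustrates that space only; the enum values and class names are hypothetical and do not reflect the FACESEC API.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical enumerations of the four threat-model dimensions named in the
# abstract; the concrete values are illustrative, not FACESEC's own interface.

class Perturbation(Enum):
    PIXEL = "pixel-level"        # digital, norm-bounded noise
    ACCESSORY = "accessory"      # physically realizable (e.g., eyeglass frame)
    FACE_MASK = "face mask"      # adversarial face mask

class Knowledge(Enum):
    FULL = "architecture + training data"
    ARCHITECTURE_ONLY = "architecture"
    DATA_ONLY = "training data"
    BLACK_BOX = "none"

class Goal(Enum):
    DODGING = "dodging"                  # avoid being recognized as oneself
    IMPERSONATION = "impersonation"      # be recognized as a target identity

class Capability(Enum):
    INDIVIDUAL = "per-input"     # perturbation tailored to a single image
    UNIVERSAL = "batch"          # one perturbation shared across a set of inputs

@dataclass
class ThreatModel:
    """One point in the four-dimensional evaluation space."""
    perturbation: Perturbation
    knowledge: Knowledge
    goal: Goal
    capability: Capability

# Example: a black-box impersonation attack with an adversarial face mask,
# shared across a set of probe images.
tm = ThreatModel(Perturbation.FACE_MASK, Knowledge.BLACK_BOX,
                 Goal.IMPERSONATION, Capability.UNIVERSAL)
print(tm)
```

Enumerating threat models this way makes it straightforward to sweep every combination of the four dimensions when evaluating a face recognition system.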

Wed Jul 08 2020
Computer Vision
Delving into the Adversarial Robustness on Face Recognition
Mon Jul 19 2021
Computer Vision
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition
Face Recognition (FR) systems have the potential to match faces to specific names and identities, creating glaring privacy concerns. Adversarial attacks are a promising way to grant users privacy by disrupting these systems' ability to recognize their faces. Yet, such attacks can be perceptible to human observers.
Wed Jul 17 2019
Neural Networks
Robustness properties of Facebook's ResNeXt WSL models
The models, recently made public by Facebook AI, were trained with ~1B images from Instagram and fine-tuned on ImageNet. We show that these models display an unprecedented degree of robustness against common image corruptions and perturbations. Remarkably, the ResNeXt WSL models even achieve a limited degree of adversarial robustness.
Sat Nov 30 2019
Machine Learning
Design and Interpretation of Universal Adversarial Patches in Face Detection
We investigate a phenomenon: patches designed to suppress real face detection appear face-like. We propose new optimization-based approaches to the automatic design of universal adversarial patches for varying attack goals.
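The snippet does not spell out the optimization, but the generic universal-patch recipe it refers to can be sketched as gradient descent on a single patch shared across a batch of images. The sketch below is illustrative only, assuming a hypothetical differentiable `detector` that returns per-image face-confidence scores and a fixed patch placement; it is not the authors' method.

```python
import torch

def universal_patch(detector, images, patch_size=64, steps=200, lr=0.05):
    """Optimize one patch that suppresses detection scores across a batch.

    `detector` and the fixed top-left placement are assumptions made for
    this sketch; images is a (B, 3, H, W) tensor with values in [0, 1].
    """
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        # Paste the same patch at the same location in every image.
        x[:, :, :patch_size, :patch_size] = patch
        loss = detector(x).mean()      # lower score = face suppressed
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)     # keep the patch a valid image
    return patch.detach()
```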
Wed Aug 26 2020
Computer Vision
Measurement-driven Security Analysis of Imperceptible Impersonation Attacks
The emergence of the Internet of Things (IoT) brings about new security challenges at the intersection of cyber and physical spaces. One prime example is the vulnerability of Face Recognition (FR)-based access control.
Sat Oct 27 2018
Artificial Intelligence
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples
Adversarial sample attacks perturb benign inputs to induce DNN misbehaviors. Existing defense techniques either assume prior knowledge of specific attacks or may not work well on complex models. We propose a novel adversarial sample detection technique for face recognition models, based on interpretability.