Published on Sun Jun 13 2021

Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles

R. Spencer Hallyburton, Yupei Liu, Miroslav Pajic

Abstract

To enable safe and reliable decision-making, autonomous vehicles (AVs) feed sensor data to perception algorithms to understand the environment. Sensor fusion, and particularly semantic fusion, with multi-frame tracking is becoming increasingly popular for detecting 3D objects. Recently, it was shown that LiDAR-based perception built on deep neural networks is vulnerable to LiDAR spoofing attacks. Thus, in this work, we perform the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context. We first find that fusion is more successful than existing defenses at guarding against naive spoofing. However, we then define the frustum attack as a new class of attacks on AVs and find that semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum attacks, with between 70% and 90% success against target models. Importantly, the attacker needs fewer than 20 random spoof points on average for a successful attack - an order of magnitude below the established maximum attacker capability. Finally, we are the first to analyze the longitudinal impact of perception attacks by showing the impact of multi-frame attacks.
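The frustum attack described above exploits the fact that a 2D camera detection back-projects to a 3D frustum, so spoofed LiDAR points placed anywhere inside that frustum remain consistent with the camera view. The sketch below illustrates the geometry only; it is not the authors' implementation, and the intrinsic matrix, depth range, and function name are illustrative assumptions.

```python
import numpy as np

def frustum_spoof_points(bbox, K, depth_range=(5.0, 8.0), n_points=20, seed=0):
    """Place random spoofed LiDAR points inside the 3D frustum that a
    2D camera detection back-projects to (illustrative sketch only).

    bbox: (u_min, v_min, u_max, v_max) pixel bounds of a 2D detection.
    K:    3x3 camera intrinsic matrix.
    Returns an (n_points, 3) array of points in the camera frame.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(bbox[0], bbox[2], n_points)   # pixel column inside the box
    v = rng.uniform(bbox[1], bbox[3], n_points)   # pixel row inside the box
    z = rng.uniform(*depth_range, n_points)       # spoofed depth along each ray
    pix = np.stack([u * z, v * z, z])             # homogeneous pixel coords scaled by depth
    pts = np.linalg.inv(K) @ pix                  # back-project into the 3D camera frame
    return pts.T

# Hypothetical KITTI-like intrinsics and a 2D box, for illustration only.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
pts = frustum_spoof_points((300, 150, 400, 250), K)
print(pts.shape)  # (20, 3)
```

Every generated point reprojects inside the original 2D box, which is why semantic fusion cannot use the camera to reject them: the spoofed points agree with the image evidence while lying at an attacker-chosen depth.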

Wed Mar 17 2021
Machine Learning
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection
Tue Jun 30 2020
Machine Learning
Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures
Perception plays a pivotal role in autonomous driving systems. LiDAR-based perception is vulnerable to spoofing attacks, and occlusion patterns in LiDAR point clouds, ignored by current models, leave self-driving cars vulnerable to such attacks.
Sun Jan 17 2021
Machine Learning
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
2D images have been found to be extremely vulnerable to adversarial attacks. A single adversary can hide different host vehicles from state-of-the-art detectors. Attacks are primarily caused by easily corrupted image features.
Thu Jun 17 2021
Computer Vision
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
In Autonomous Driving (AD) systems, perception is both security and safety critical. However, AD systems today predominantly adopt a Multi-Sensor Fusion (MSF)-based design. We present the first study of security issues of MSF-based perception in AD systems.
Tue Jul 16 2019
Machine Learning
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
In Autonomous Vehicles (AVs), one fundamental pillar is perception. We consider LiDAR spoofing attacks as the threat model and set the attack goal as placing spoofed obstacles as close to the front of a victim AV as possible. We then explore the possibility of controlling the spoofed attack to fool the machine learning model.
Thu Jun 24 2021
Computer Vision
Multi-Modal 3D Object Detection in Autonomous Driving: a Survey
Self-driving cars are equipped with a suite of sensors to conduct robust and accurate environment perceptions. As the number and type of sensors keep increasing, combining them for better perception is becoming a natural trend. So far, there has been no review that focuses on multi-sensor fusion based perception.
Fri Dec 02 2016
Computer Vision
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Point cloud is an important type of geometric data structure. Most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous. In this paper, we design a novel type of neural network that directly consumes point clouds.
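PointNet's key idea is consuming unordered point sets directly: a shared per-point MLP followed by a symmetric max-pooling step, so the global feature does not depend on point ordering. The toy NumPy sketch below shows only that permutation-invariance property, with random weights standing in for trained ones; it is not the paper's architecture.

```python
import numpy as np

def pointnet_global_feature(points, W1, W2):
    """Toy PointNet-style encoder: a shared per-point MLP followed by a
    symmetric max-pool, making the output invariant to point ordering.
    points: (N, 3) point cloud; W1, W2: weight matrices (random here).
    """
    h = np.maximum(points @ W1, 0.0)  # shared MLP layer 1 (ReLU), applied per point
    h = np.maximum(h @ W2, 0.0)       # shared MLP layer 2
    return h.max(axis=0)              # max over points: an order-independent global feature

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))
W1 = rng.normal(size=(3, 64))
W2 = rng.normal(size=(64, 256))
feat = pointnet_global_feature(pts, W1, W2)
shuffled = pointnet_global_feature(rng.permutation(pts), W1, W2)
print(np.allclose(feat, shuffled))  # True: same feature regardless of point order
```

Because max-pooling is a symmetric function, shuffling the input rows leaves the global feature unchanged, which is what lets the network operate on raw point clouds without voxelization.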
Thu Apr 30 2015
Computer Vision
Fast R-CNN
The Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN and is 213x faster at test time; compared to SPPnet, it tests 10x faster and is more accurate.
Tue Dec 11 2018
Computer Vision
PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud
In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates. Extensive experiments on the KITTI benchmark show that the proposed architecture outperforms state-of-the-art methods.
Mon Feb 08 2016
Machine Learning
Practical Black-Box Attacks against Machine Learning
Machine learning (ML) models are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs. All existing adversarial example attacks require knowledge of the model internals. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge.
Tue Jun 23 2020
Computer Vision
Towards Robust Sensor Fusion in Visual Perception
We study the problem of robust sensor fusion in visual perception, especially in autonomous driving settings. We evaluate the robustness of RGB camera and LiDAR sensor fusion for binary classification and object detection.