Published on Sat Mar 17 2018

A Dual Approach to Scalable Verification of Deep Networks

Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, Pushmeet Kohli

This paper addresses the problem of formally verifying desirable properties of neural networks. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation.

Abstract

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
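To make the Lagrangian idea concrete, here is a minimal numpy sketch of the weak-duality bound for a single ReLU layer with box input constraints. The setup, the function names (dual_bound, interval_bounds), and the subgradient loop are illustrative assumptions, not the paper's code; the paper develops the analogous decomposition for deep networks and general activations.

import numpy as np

def interval_bounds(W, b, l, u):
    # Interval arithmetic: elementwise bounds on z = W x + b for x in [l, u].
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def dual_bound(lam, W, b, c, l, u):
    # For any multiplier lam, weak duality gives
    #   max_{x in [l,u]} c^T relu(W x + b)
    #     <= max_{x in [l,u]} lam^T (W x + b)
    #        + max_{z in [lz,uz]} (c^T relu(z) - lam^T z),
    # because moving the coupling constraint z = W x + b into the
    # objective decouples the two maximizations.
    lz, uz = interval_bounds(W, b, l, u)

    # The linear maximization over the input box decomposes per coordinate.
    a = W.T @ lam
    x_star = np.where(a >= 0.0, u, l)

    # c_j*relu(z_j) - lam_j*z_j is piecewise linear with a kink at 0,
    # so its maximum over [lz_j, uz_j] is at lz_j, uz_j, or 0.
    f = lambda z: c * np.maximum(z, 0.0) - lam * z
    cands = np.stack([lz, uz, np.clip(0.0, lz, uz)])
    z_star = cands[np.argmax(f(cands), axis=0), np.arange(lz.size)]

    value = lam @ (W @ x_star + b) + f(z_star).sum()
    subgrad = W @ x_star + b - z_star  # subgradient of the bound in lam
    return value, subgrad

# Any lam yields a valid certificate; subgradient steps only tighten it.
rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)
l, u = -np.ones(3), np.ones(3)
lam = np.zeros(4)
for t in range(300):
    _, sg = dual_bound(lam, W, b, c, l, u)
    lam -= 0.1 / np.sqrt(t + 1.0) * sg
print("certified upper bound:", dual_bound(lam, W, b, c, l, u)[0])

Stopping the loop at any iteration still leaves a sound bound, which is exactly the anytime property claimed in the abstract.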

Related Papers

Mon Feb 25 2019
Machine Learning
Verification of Non-Linear Specifications for Neural Networks
Prior work on neural network verification has focused on specifications that are linear functions of the output of the network. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model.

Sun May 06 2018
Machine Learning
Reachability Analysis of Deep Neural Networks with Provable Guarantees
We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The novel algorithm has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks.

Fri Mar 15 2019
Machine Learning
Algorithms for Verifying Deep Neural Networks
Deep neural networks are widely used for nonlinear function approximation. They have applications ranging from computer vision to control. It can be challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties.

Wed Nov 01 2017
Artificial Intelligence
A Unified View of Piecewise Linear Neural Network Verification
Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models.

Mon Jun 03 2019
Machine Learning
Correctness Verification of Neural Networks
We present the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which identifies a state space and an observation process that produces neural network inputs from the states of the world.

Thu Jun 18 2020
Machine Learning
PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier
Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications. We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the …
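As an illustration of the relaxed-convex-program idea, the sketch below applies the standard "triangle" LP relaxation of ReLU (a common choice in such verifiers, not necessarily PEREGRiNN's exact formulation) to bound a one-layer network with scipy. It assumes every pre-activation interval straddles zero; stably-on neurons would instead use y = z and stably-off neurons y = 0. The name triangle_lp_bound is illustrative.

import numpy as np
from scipy.optimize import linprog

def triangle_lp_bound(W, b, c, l, u):
    # Upper-bounds max_{x in [l,u]} c^T relu(W x + b) by replacing each
    # y = relu(z) with its convex "triangle" relaxation
    #   y >= 0,  y >= z,  y <= uz*(z - lz)/(uz - lz),
    # assuming every pre-activation interval [lz, uz] straddles 0.
    m, n = W.shape
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lz, uz = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b
    s = uz / (uz - lz)  # slope of the relaxation's upper face

    # Variable order: [x (n), z (m), y (m)].
    A_eq = np.hstack([-W, np.eye(m), np.zeros((m, m))])          # z - Wx = b
    A_ub = np.vstack([
        np.hstack([np.zeros((m, n)), np.eye(m), -np.eye(m)]),    # z - y <= 0
        np.hstack([np.zeros((m, n)), -np.diag(s), np.eye(m)]),   # y - s*z <= -s*lz
    ])
    b_ub = np.concatenate([np.zeros(m), -s * lz])
    bounds = ([(lo, hi) for lo, hi in zip(l, u)]
              + [(lo, hi) for lo, hi in zip(lz, uz)]
              + [(0.0, None)] * m)
    obj = np.concatenate([np.zeros(n), np.zeros(m), -c])         # maximize c^T y
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return -res.fun

# Small example where both pre-activation intervals straddle zero.
W = np.array([[1.0, -1.0], [0.5, 0.25]])
b, c = np.array([0.1, -0.2]), np.array([1.0, 1.0])
l, u = -np.ones(2), np.ones(2)
print("LP upper bound:", triangle_lp_bound(W, b, c, l, u))

Complete verifiers such as PEREGRiNN then refine a relaxation of this kind, e.g. by case-splitting on neuron activations, to close the gap to the exact worst case.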