Neural network verification focuses on obtaining formal guarantees of alignment properties (such as safety, ethicality, or cooperation); such guarantees are the strongest form of assurance.
Formal Frameworks
Recommended Papers List
AI2: Safety and robustness certification of neural networks with abstract interpretation
We present AI2, the first sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, enabling us to leverage decades of advances in that area. Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This allows us to handle real-world neural networks, which are often built out of those types of layers. We present a complete implementation of AI2 together with an extensive evaluation on 20 neural networks. Our results demonstrate that: (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks, and (iv) AI2 can handle deep convolutional networks, which are beyond the reach of existing methods.
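To make the bound-propagation pattern concrete, here is a minimal, hypothetical Python sketch that pushes the simplest abstract domain (elementwise interval boxes) through an affine layer and a ReLU; AI2 itself uses more precise zonotope and polyhedra domains, and the weights below are made up purely for illustration.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b using interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps a box to a box exactly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-2-2 network and an L_inf ball of radius 0.1 around the input (1.0, 0.5).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0], [-1.0, 1.0]]), np.array([0.0, 0.0])
x, eps = np.array([1.0, 0.5]), 0.1

lo, hi = x - eps, x + eps
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)
# Robustness is certified when the lower bound of the predicted logit
# exceeds the upper bound of every other logit.
print("certified:", bool(lo[0] > hi[1]))
```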
An abstract domain for certifying neural networks
We present a novel method for scalable and precise certification of deep neural networks. The key technical insight behind our approach is a new abstract domain which combines floating point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks. Concretely, we introduce new transformers for affine transforms, the rectified linear unit (ReLU), sigmoid, tanh, and maxpool functions.
Differentiable abstract interpretation for provably robust neural networks
We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision and show these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
Fast and effective robustness certification
We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation. Compared to state-of-the-art automated verifiers for neural networks, DeepZ: (i) handles ReLU, Tanh and Sigmoid activation functions, (ii) supports feedforward and convolutional architectures, (iii) is significantly more scalable and precise, and (iv) is sound with respect to floating point arithmetic. These benefits are due to carefully designed approximations tailored to the setting of neural networks. As an example, DeepZ achieves a verification accuracy of 97% on a large network with 88,500 hidden units under an $L_\infty$ attack with $\epsilon=0.1$, with an average runtime of 133 seconds.
The fundamental limits of interval arithmetic for neural networks
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning. However, despite substantial efforts, progress on addressing this key challenge has stagnated, calling into question whether interval arithmetic is a viable path forward. In this paper we present two fundamental results on the limitations of interval arithmetic for analyzing neural networks. Our main impossibility theorem states that for any neural network classifying just three points, there is a valid specification over these points that interval analysis cannot prove. Further, in the restricted case of one-hidden-layer neural networks we show a stronger impossibility result: given any radius , there is a set of points with robust radius , separated by distance , that no one-hidden-layer network can be proven to classify robustly via interval analysis.
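As a concrete illustration of the limitation these results formalize (a standard illustration of the "dependency problem", not an example taken from the paper), interval arithmetic analyses different occurrences of the same quantity independently, so even the identically zero function x - x receives a bound of nonzero width:

```python
import numpy as np

# x ranges over [-0.1, 0.1]; f(x) = x - x is identically zero.
lo, hi = np.array([-0.1]), np.array([0.1])
# Interval subtraction treats the two occurrences of x as unrelated inputs,
# so the computed bound is [-0.2, 0.2] rather than the exact [0, 0].
print(lo - hi, hi - lo)
```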
Methods Based on MILP and SMT
Recommended Papers List
A unified view of piecewise linear neural network verification
The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theory. These methods are however still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we make two key contributions. First, we present a unified framework that encompasses previous methods. This analysis results in the identification of new methods that combine the strengths of multiple existing approaches, accomplishing a speedup of two orders of magnitude compared to the previous state of the art. Second, we propose a new data set of benchmarks which includes a collection of previously released test cases. We use the benchmark to provide the first experimental comparison of existing algorithms and identify the factors impacting the hardness of verification problems.
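The piecewise-linear structure these verifiers exploit is typically captured by the standard big-M mixed-integer encoding of a single ReLU with known pre-activation bounds. The sketch below, which assumes the PuLP modelling library and uses made-up bounds, shows that generic encoding rather than any particular tool's formulation.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

l, u = -1.0, 2.0                       # hypothetical pre-activation bounds with l < 0 < u
prob = LpProblem("relu_bigM", LpMaximize)
x = LpVariable("x", lowBound=l, upBound=u)
y = LpVariable("y", lowBound=0.0)      # y >= 0 is part of the encoding
d = LpVariable("d", cat=LpBinary)      # d = 1 iff the ReLU is in its active phase

prob += y                              # objective (arbitrary here): maximize y
prob += y >= x                         # y can never drop below x
prob += y <= x - l * (1 - d)           # d = 1 forces y <= x, hence y = x
prob += y <= u * d                     # d = 0 forces y <= 0, hence y = 0
prob += x == 0.5                       # fix the input to sanity-check the encoding

prob.solve()
print(x.varValue, y.varValue, d.varValue)   # expected: 0.5 0.5 1.0
```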
An approach to reachability analysis for feed-forward ReLU neural networks
We study the reachability problem for systems implemented as feed-forward neural networks whose activation function is implemented via ReLU functions. We draw a correspondence between establishing whether some arbitrary output can ever be output by a neural system and linear problems characterising a neural system of interest. We present a methodology to solve cases of practical interest by means of a state-of-the-art linear programming solver. We evaluate the technique presented by discussing the experimental results obtained by analysing reachability properties for a number of benchmarks in the literature.
An SMT-based approach for verifying binarized neural networks
Deep learning has emerged as an effective approach for creating modern software systems, with neural networks often surpassing hand-crafted systems. Unfortunately, neural networks are known to suffer from various safety and security issues. Formal verification is a promising avenue for tackling this difficulty, by formally certifying that networks are correct. We propose an SMT-based technique for verifying binarized neural networks — a popular kind of neural network, where some weights have been binarized in order to render the neural network more memory and energy efficient, and quicker to evaluate. One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components. Neural network verification is computationally very difficult, and so we propose here various optimizations, integrated into our SMT procedure as deduction steps, as well as an approach for parallelizing verification queries. We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
Efficient verification of ReLU-based neural networks via dependency analysis
We introduce an efficient method for the verification of ReLU-based feed-forward neural networks. We derive an automated procedure that exploits dependency relations between the ReLU nodes, thereby pruning the search tree that needs to be considered by MILP-based formulations of the verification problem. We augment the resulting algorithm with methods for input domain splitting and symbolic interval propagation. We present Venus, the resulting verification toolkit, and evaluate it on the ACAS collision avoidance networks and models trained on the MNIST and CIFAR-10 datasets. The experimental results obtained indicate considerable gains over the present state-of-the-art tools.
Experiments in mixed-integer linear programming
This paper presents a “branch and bound” method for solving mixed-integer linear programming problems. After briefly discussing the bases of the method, new concepts called pseudo-costs and estimations are introduced. Then, the heuristic rules for generating the tree, which are the main features of the method, are presented. Numerous parameters allow the user to adjust the search strategy to a given problem.
This method has been implemented in the IBM Extended Mathematical Programming System in order to solve large mixed-integer LP problems. Numerical results making comparisons between different choices of rules are provided and discussed.
Safety verification of deep neural networks
Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
Formal analysis of neural network-based systems in the aircraft domain
Neural networks are being increasingly used for efficient decision making in the aircraft domain. Given the safety-critical nature of the applications involved, stringent safety requirements must be met by these networks. In this work we present a formal study of two neural network-based systems developed by Boeing. The Venus verifier is used to analyse the conditions under which these systems can operate safely, or generate counterexamples that show when safety cannot be guaranteed. Our results confirm the applicability of formal verification to the settings considered.
Formal verification of piece-wise linear feed-forward neural networks
We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers. The starting point of our approach is the addition of a global linear approximation of the overall network behavior to the verification problem that helps with SMT-like reasoning over the network behavior. We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving. We also show how to infer additional conflict clauses and safe node fixtures from the results of the analysis steps performed during the search. The resulting approach is evaluated on collision avoidance and handwritten digit recognition case studies.
Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks
Constraint solving is an elementary way of verifying deep neural networks (DNNs). In the domain of AI safety, a DNN might be modified in its structure and parameters for repair or attack. For such situations, we propose the incremental DNN verification problem, which asks whether a safety property still holds after the DNN is modified. To solve the problem, we present an incremental satisfiability modulo theory (SMT) algorithm based on the Reluplex framework. We simulate the most important features of the configurations that infer the verification result of the search branches in the old solving procedure (with respect to the original network), and heuristically check whether the proofs are still valid for the modified DNN. We implement our algorithm as an incremental solver called DeepInc, and experimental results show that DeepInc is more efficient in most cases. For cases where the property holds both before and after modification, the speedup can reach several orders of magnitude, showing that DeepInc is outstanding in incrementally searching for counterexamples. Moreover, based on the framework, we propose the multi-objective DNN repair problem and give an algorithm based on our incremental SMT solving algorithm. Our repair method preserves more potential safety properties on the repaired DNNs compared with the state of the art.
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs). It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors. Therefore, DNNs planted in safety-critical systems should be verified to be robust against occlusions prior to deployment. However, most existing robustness verification approaches for DNNs are focused on non-semantic perturbations and are not suited to the occlusion case. In this paper, we propose the first efficient, SMT-based approach for formally verifying the occlusion robustness of DNNs. We formulate the occlusion robustness verification problem and prove it is NP-complete. Then, we devise a novel approach for encoding occlusions as a part of neural networks and introduce two acceleration techniques so that the extended neural networks can be efficiently verified using off-the-shelf, SMT-based neural network verification tools. We implement our approach in a prototype called OccRob and extensively evaluate its performance on benchmark datasets with various occlusion variants. The experimental results demonstrate our approach’s effectiveness and efficiency in verifying DNNs’ robustness against various occlusions, and its ability to generate counterexamples when these DNNs are not robust.
Reachability analysis for neural agent-environment systems
We develop a novel model for studying agent-environment systems, where the agents are implemented via feed-forward ReLU neural networks. We provide a semantics and develop a method to verify automatically that no unwanted states are reached by the system during its evolution. We study several reachability problems for the system, ranging from one-step reachability to fixed multi-step and arbitrary-step reachability, in order to study the system's evolution. We also study the decision problem of whether an agent, realised via feed-forward ReLU networks, will perform an action in a system run. Whenever possible, we give tight complexity bounds for the decision problems introduced. We automate the various reachability problems studied by recasting them as mixed-integer linear programming problems. We present an implementation and discuss the experimental results obtained on a range of test cases.
Reluplex: An efficient SMT solver for verifying deep neural networks
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
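Reluplex extends the simplex algorithm with lazy case-splitting over ReLU phases; the hypothetical sketch below does not reproduce that algorithm, but shows the kind of query it answers by encoding a toy ReLU network and a local robustness property directly for an off-the-shelf SMT solver (assuming the z3-solver Python bindings, with made-up weights).

```python
from z3 import Real, Solver, If, sat

def relu(e):
    return If(e >= 0, e, 0)

# Toy 2-2-1 ReLU network with fixed, illustrative weights.
x0, x1 = Real("x0"), Real("x1")
h0 = relu(1.0 * x0 - 1.0 * x1)
h1 = relu(0.5 * x0 + 2.0 * x1)
y = h0 - h1

s = Solver()
# Input region: an L_inf ball of radius 0.1 around (1.0, 0.5).
s.add(x0 >= 0.9, x0 <= 1.1, x1 >= 0.4, x1 <= 0.6)
# Assert the negation of the property "y < 1"; any model is a counterexample.
s.add(y >= 1)
print("counterexample found" if s.check() == sat else "property holds")
```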
Satisfiability modulo theories: introduction and applications
Checking the satisfiability of logical formulas, SMT solvers scale orders of magnitude beyond custom ad hoc solvers.
Satisfiability modulo theories
Satisfiability Modulo Theories (SMT) refers to the problem of determining whether a first-order formula is satisfiable with respect to some logical theory. Solvers based on SMT are used as back-end engines in model-checking applications such as bounded, interpolation-based, and predicate-abstraction-based model checking. After a brief illustration of these uses, we survey the predominant techniques for solving SMT problems with an emphasis on the lazy approach, in which a propositional satisfiability (SAT) solver is combined with one or more theory solvers. We discuss the architecture of a lazy SMT solver, give examples of theory solvers, show how to combine such solvers modularly, and mention several extensions of the lazy approach. We also briefly describe the eager approach in which the SMT problem is reduced to a SAT problem. Finally, we discuss how the basic framework for determining satisfiability can be extended with additional functionality such as producing models, proofs, unsatisfiable cores, and interpolants.
The marabou framework for verification and analysis of deep neural networks
Deep neural networks are revolutionizing the way complex systems are designed. Consequently, there is a pressing need for tools and techniques for network analysis and certification. To help in addressing that need, we present Marabou, a framework for verifying deep neural networks. Marabou is an SMT-based tool that can answer queries about a network’s properties by transforming these queries into constraint satisfaction problems. It can accommodate networks with different activation functions and topologies, and it performs high-level reasoning on the network that can curtail the search space and improve performance. It also supports parallel execution to further enhance scalability. Marabou accepts multiple input formats, including protocol buffer files generated by the popular TensorFlow framework for neural networks. We describe the system architecture and main components, evaluate the technique and discuss ongoing work.
Extensions to Agent-Environment Interactions and Multi-Agent Systems
Recommended Papers List
Framework for Formal Verification of Machine Learning Based Complex System-of-Systems
A complex system is characterized by the emergence of global properties which are very difficult, if not impossible, to anticipate just from complete knowledge of component behaviors. Emergence, hierarchical organization, and numerosity are some of the characteristics of complex systems. Recently, there has been an exponential increase in the adoption of various neural network-based machine learning models to govern the functionality and behavior of systems. With this increasing system complexity, achieving confidence in systems becomes even more difficult. Further, ease of interconnectivity among systems is permeating numerous systems-of-systems, wherein multiple independent systems are expected to interact and collaborate to achieve unparalleled levels of functionality. Traditional verification and validation approaches are often inadequate to bring in the nuances of potential emergent behavior in a system-of-systems, which may be positive or negative. This paper describes a novel approach to applying machine learning-based classifiers and formal methods for analyzing and evaluating the emergent behavior of complex systems-of-systems that comprise a hybrid of constituent systems governed by conventional models and machine learning models. The proposed approach involves developing a machine learning classifier model that learns from potential negative and positive emergent behaviors and predicts the behavior exhibited. A formal verification model is then developed to assert negative emergent behavior. The approach is illustrated through the case of a swarm of autonomous UAVs flying in formation and dynamically changing the shape of the formation to support varying mission scenarios. The effectiveness and performance of the approach are quantified.
Reachability analysis for neural agent-environment systems
We develop a novel model for studying agent-environment systems, where the agents are implemented via feed-forward ReLU neural networks. We provide a semantics and develop a method to verify automatically that no unwanted states are reached by the system during its evolution. We study several reachability problems for the system, ranging from one-step reachability to fixed multi-step and arbitrary-step reachability, in order to study the system's evolution. We also study the decision problem of whether an agent, realised via feed-forward ReLU networks, will perform an action in a system run. Whenever possible, we give tight complexity bounds for the decision problems introduced. We automate the various reachability problems studied by recasting them as mixed-integer linear programming problems. We present an implementation and discuss the experimental results obtained on a range of test cases.
Towards Formal Verification of Neuro-symbolic Multi-agent Systems
This paper outlines some of the key methods we developed towards the formal verification of multi-agent systems, covering both symbolic and connectionist systems. It discusses logic-based methods for the verification of unbounded multi-agent systems (i.e., multi-agent systems composed of an arbitrary number of homogeneous agents), optimisation approaches for establishing the robustness of neural network models, and mixed-integer linear programming methods for analysing properties of neuro-symbolic multi-agent systems.
Verification of RNN-based neural agent-environment systems
We introduce agent-environment systems where the agent is stateful and executing a ReLU recurrent neural network. We define and study their verification problem by providing equivalences of recurrent and feed-forward neural networks on bounded execution traces. We give a sound and complete procedure for their verification against properties specified in a simplified version of LTL on bounded executions. We present an implementation and discuss the experimental results obtained.
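The bounded-trace equivalence this work relies on can be pictured with a small, hypothetical sketch: unrolling a ReLU recurrent cell for k steps yields a k-layer feed-forward ReLU computation over the trace, which is what lets feed-forward verification techniques be reused (the weights below are made up).

```python
import numpy as np

def rnn_step(W_h, W_x, b, h, x):
    """One step of a ReLU recurrent cell: h' = relu(W_h @ h + W_x @ x + b)."""
    return np.maximum(W_h @ h + W_x @ x + b, 0.0)

def unroll(W_h, W_x, b, h0, inputs):
    """Unroll the recurrence over a bounded trace: one feed-forward 'layer' per step."""
    h = h0
    for x in inputs:
        h = rnn_step(W_h, W_x, b, h, x)
    return h

# Toy cell with 2 hidden units, scalar inputs, and a trace of length 3.
W_h = np.array([[0.5, 0.0], [0.0, 0.5]])
W_x = np.array([[1.0], [-1.0]])
b = np.zeros(2)
trace = [np.array([1.0]), np.array([0.2]), np.array([-0.5])]
print(unroll(W_h, W_x, b, np.zeros(2), trace))
```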