Parallelizable Reachability Analysis Algorithms for Feed-Forward Neural Networks
Artificial neural networks (ANNs) have displayed considerable utility in a wide range of applications such as image processing, character and pattern recognition, self-driving cars, evolutionary robotics, and non-linear system identification and control. While ANNs are able to carry out complicated tasks efficiently, they are susceptible to unpredictable and errant behavior due to irregularities that emanate from their complex non-linear structure. As a result, there have been reservations about incorporating them into safety-critical systems. In this paper, we propose a reachability analysis method for feed-forward neural networks (FNNs) that employ rectified linear units (ReLUs) as activation functions. The crux of our approach relies on three reachable-set computation algorithms: exact, lazy-approximate, and mixing schemes. The exact scheme computes an exact reachable set for an FNN, while the lazy-approximate and mixing schemes generate an over-approximation of the exact reachable set. All schemes are designed to run efficiently on parallel platforms to reduce computation time and enhance scalability. Our methods are implemented in a MATLAB toolbox called NNV and evaluated on a set of benchmarks consisting of realistic neural networks whose sizes range from tens to a thousand neurons. Notably, NNV successfully computes and visualizes the exact reachable sets, represented as unions of tens to hundreds of polyhedra, of the real-world ACAS Xu deep neural networks (DNNs) in the new generation of Airborne Collision Avoidance System X.
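To give a flavor of over-approximate reachability for ReLU networks, the sketch below propagates an axis-aligned box through a small FNN using interval arithmetic. This is an illustrative simplification, not NNV's actual algorithm: NNV's schemes operate on polyhedral set representations, whereas this example over-approximates each layer's output with a box, and the network weights here are made up for the demonstration.

```python
import numpy as np

def interval_reach(layers, lo, hi):
    """Propagate the input box [lo, hi] through a ReLU FNN.

    `layers` is a list of (W, b) pairs. Each affine layer maps the box's
    center and radius exactly; applying ReLU element-wise to the resulting
    bounds yields a sound (but generally loose) over-approximation.
    Illustrative only -- polyhedral methods such as NNV's are far tighter.
    """
    for W, b in layers:
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius  # worst-case spread of the box
        lo = np.maximum(new_center - new_radius, 0.0)  # ReLU on lower bound
        hi = np.maximum(new_center + new_radius, 0.0)  # ReLU on upper bound
    return lo, hi

# Toy 2-2-1 network (hypothetical weights) and the input box [-1, 1]^2.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
          (np.array([[1.0, 1.0]]), np.array([0.5]))]
lo, hi = interval_reach(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(lo, hi)  # a box guaranteed to contain every reachable output
```

Every exact output of the network for inputs in the box is contained in `[lo, hi]`; the exact scheme would instead return a union of polyhedra with no such slack.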
Mon 27 May

|14:00 - 14:25|
|14:25 - 14:40|
|14:40 - 15:05| Verifying Channel Communication Correctness for a Multi-Core Cooperatively Scheduled Runtime Using CSP
|15:05 - 15:30|