Robustness of Neural Networks: A Probabilistic and Practical Perspective (NIER)
Neural networks are becoming increasingly prevalent in software, and it is therefore important to be able to verify their behavior. Because verifying the correctness of neural networks is extremely challenging, it is common to focus on the verification of other properties of these systems. One important property, in particular, is robustness. Most existing definitions of robustness, however, are at one of two extremes: practical definitions, whose notion of robustness is too weak to be useful, or (too) strong definitions that are unlikely to be satisfied by, and verifiable for, practical neural networks. To strike a balance between these two extremes, we propose a novel notion of robustness: probabilistic robustness. Given a probability distribution over the inputs to a neural network, probabilistic robustness requires the neural network to be robust with at least (1 − ε) probability. This probabilistic approach is practical and provides a principled way of estimating the robustness of a neural network. We also present an algorithm, based on abstract interpretation and importance sampling, for checking whether a neural network is probabilistically robust. Our algorithm uses abstract interpretation to approximate the behavior of a neural network and compute an overapproximation of the input regions that violate robustness. It then uses importance sampling to counter the effect of such overapproximation and compute an accurate estimate of the probability that the neural network violates the robustness property.
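The second step of the algorithm, estimating the violation probability once the violating input regions have been overapproximated, can be illustrated with a small importance-sampling sketch. This is a minimal sketch under stated assumptions, not the authors' implementation; the helpers sample_region, region_density, input_density, and violates_robustness are hypothetical stand-ins for a proposal distribution concentrated on the overapproximated region, the proposal and input densities, and an exact robustness check for a concrete input.

```python
import numpy as np

def estimate_violation_probability(sample_region, region_density, input_density,
                                   violates_robustness, n_samples=10_000, seed=0):
    """Importance-sampling estimate of the probability that an input drawn from
    the true input distribution violates robustness.

    sample_region(rng)      -- draws an input from a proposal concentrated on the
                               overapproximated violating region (hypothetically
                               obtained via abstract interpretation)
    region_density(x)       -- density q(x) of that proposal
    input_density(x)        -- density p(x) of the true input distribution
    violates_robustness(x)  -- exact robustness check for the concrete input x
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x = sample_region(rng)
        # Importance weight p(x)/q(x) corrects for sampling from the proposal
        # instead of the true input distribution.
        weight = input_density(x) / region_density(x)
        if violates_robustness(x):
            total += weight
    return total / n_samples
```

Because the overapproximation guarantees that every violating input lies inside the sampled region, restricting the proposal to that region keeps the estimator unbiased while concentrating samples where violations can actually occur.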
Fri 31 May (displayed time zone: Eastern Time, US & Canada)
14:00 - 15:30 | Testing of AI Systems (New Ideas and Emerging Results / Demonstrations / Technical Track) at Place du Canada. Chair(s): Marija Mikic (Google)
14:00 (20m) Talk | CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries (Technical Track). Hung Viet Pham (University of Waterloo), Thibaud Lutellier, Weizhen Qi (University of Science and Technology of China), Lin Tan (Purdue University). Pre-print
14:20 (20m) Talk | Guiding Deep Learning System Testing using Surprise Adequacy (Technical Track). Jinhan Kim (KAIST), Robert Feldt (Chalmers University of Technology), Shin Yoo (Korea Advanced Institute of Science and Technology). Authorizer link, Pre-print
14:40 (20m) Talk | DeepConcolic: Testing and Debugging Deep Neural Networks (Demonstrations). Youcheng Sun (University of Oxford), Xiaowei Huang (University of Liverpool), Daniel Kroening (University of Oxford), James Sharp (Defence Science and Technology Laboratory (Dstl)), Matthew Hill (Defence Science and Technology Laboratory (Dstl)), Rob Ashmore (Defence Science and Technology Laboratory (Dstl))
15:00 (10m) Talk | Towards Improved Testing For Deep Learning (New Ideas and Emerging Results). Pre-print
15:10 (10m) Talk | Structural Coverage Criteria for Neural Networks Could Be Misleading (New Ideas and Emerging Results). Zenan Li (Nanjing University), Xiaoxing Ma (Nanjing University), Chang Xu (Nanjing University), Chun Cao (Nanjing University). Pre-print
15:20 (10m) Talk | Robustness of Neural Networks: A Probabilistic and Practical Perspective (New Ideas and Emerging Results)