ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Fri 31 May 2019 15:20 - 15:30 at Place du Canada - Testing of AI Systems Chair(s): Marija Mikic

Neural networks are becoming increasingly prevalent in software, and it is therefore important to be able to verify their behavior. Because verifying the correctness of neural networks is extremely challenging, it is common to focus on the verification of other properties of these systems. One important property, in particular, is robustness. Most existing definitions of robustness, however, are at one of two extremes: practical definitions, whose notion of robustness is too weak to be useful, or (too) strong definitions that are unlikely to be satisfied by, and verifiable for, practical neural networks. To strike a balance between these two extremes, we propose a novel notion of robustness: probabilistic robustness. Given a probability distribution over the inputs to a neural network, probabilistic robustness requires the neural network to be robust with at least (1 − ε) probability. This probabilistic approach is practical and provides a principled way of estimating the robustness of a neural network. We also present an algorithm, based on abstract interpretation and importance sampling, for checking whether a neural network is probabilistically robust. Our algorithm uses abstract interpretation to approximate the behavior of a neural network and compute an overapproximation of the input regions that violate robustness. It then uses importance sampling to counter the effect of such overapproximation and compute an accurate estimate of the probability that the neural network violates the robustness property.
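As a rough illustration of the importance-sampling step described in the abstract, the sketch below estimates the probability of a robustness violation by drawing samples from a proposal distribution concentrated on a suspected violating region and reweighting them back to the input distribution. Everything concrete here is assumed for illustration: the linear stand-in "network" W, the suspect-region center mu_q, the perturbation radius eps, and the 0.05 threshold are not values from the paper, and the paper derives its suspect regions via abstract interpretation rather than hard-coding them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "network": a fixed 2-class linear classifier on 4-d inputs.
# The paper targets real neural networks; this model is hypothetical.
W = rng.normal(size=(2, 4))

def predict(x):
    return int(np.argmax(W @ x))

def is_robust(x, eps=0.1, n_probe=20):
    # Approximate local robustness: the prediction should not change
    # for random perturbations within an L-infinity ball of radius eps.
    label = predict(x)
    return all(predict(x + rng.uniform(-eps, eps, size=x.shape)) == label
               for _ in range(n_probe))

# Input distribution p: standard normal. Proposal q: normal shifted toward
# a region that (in the paper) abstract interpretation would flag as a
# possible robustness violation; here the center is simply assumed.
mu_q = np.array([1.0, -1.0, 0.5, 0.0])

def log_p(x):
    return -0.5 * float(np.sum(x ** 2))

def log_q(x):
    return -0.5 * float(np.sum((x - mu_q) ** 2))

n = 5000
xs = rng.normal(loc=mu_q, scale=1.0, size=(n, 4))   # sample from proposal q
log_w = np.array([log_p(x) - log_q(x) for x in xs])
w = np.exp(log_w - log_w.max())
w /= w.sum()                                        # self-normalized weights
viol = np.array([not is_robust(x) for x in xs], dtype=float)

p_viol = float(np.sum(w * viol))
print(f"estimated P(robustness violation) ~= {p_viol:.4f}")
print(f"probabilistically robust for epsilon = 0.05: {p_viol <= 0.05}")
```

Sampling from q rather than p concentrates samples where violations are suspected, so rare violations are observed often enough to estimate accurately; the importance weights p(x)/q(x) then correct the estimate so it remains unbiased with respect to the true input distribution.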

Fri 31 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30: Testing of AI Systems at Place du Canada. Chair(s): Marija Mikic
14:00
20m
Talk
CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries
Technical Track
Hung Viet Pham University of Waterloo, Thibaud Lutellier, Weizhen Qi University of Science and Technology of China, Lin Tan Purdue University
Pre-print
14:20
20m
Talk
Guiding Deep Learning System Testing using Surprise Adequacy
Artifacts Available, Artifacts Evaluated Reusable, Results Reproduced, Technical Track
Jinhan Kim KAIST, Robert Feldt Chalmers University of Technology, Shin Yoo Korea Advanced Institute of Science and Technology
Authorizer link, Pre-print
14:40
20m
Talk
DeepConcolic: Testing and Debugging Deep Neural Networks
Demonstrations
Youcheng Sun University of Oxford, Xiaowei Huang University of Liverpool, Daniel Kroening University of Oxford, James Sharp Defence Science and Technology Laboratory (Dstl), Matthew Hill Defence Science and Technology Laboratory (Dstl), Rob Ashmore Defence Science and Technology Laboratory (Dstl)
15:00
10m
Talk
Towards Improved Testing For Deep Learning
New Ideas and Emerging Results
Jasmine Sekhon University of Virginia, Cody Fleming University of Virginia
Pre-print
15:10
10m
Talk
Structural Coverage Criteria for Neural Networks Could Be Misleading
New Ideas and Emerging Results
Zenan Li Nanjing University, Xiaoxing Ma Nanjing University, Chang Xu Nanjing University, Chun Cao Nanjing University
Pre-print
15:20
10m
Talk
Robustness of Neural Networks: A Probabilistic and Practical Approach
New Ideas and Emerging Results
Ravi Mangal Georgia Institute of Technology, Aditya Nori, Alessandro Orso Georgia Institute of Technology