ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Fri 31 May 2019, 14:20 - 14:40, at Place du Canada. Session: Testing of AI Systems. Chair(s): Marija Mikic

Deep Learning (DL) systems are rapidly being adopted in safety- and security-critical domains, urgently calling for ways to test their correctness and robustness. Testing of DL systems has traditionally relied on manual collection and labelling of data. Recently, a number of coverage criteria based on neuron activation values have been proposed. These criteria essentially count the number of neurons whose activation during the execution of a DL system satisfies certain properties, such as being above predefined thresholds. However, existing coverage criteria are not sufficiently fine-grained to capture the subtle behaviours exhibited by DL systems. Moreover, evaluations have focused on showing correlation between adversarial examples and the proposed criteria, rather than on evaluating and guiding their use for the actual testing of DL systems. We propose a novel test adequacy criterion for testing of DL systems, called Surprise Adequacy for Deep Learning Systems (SADL), which is based on the behaviour of DL systems with respect to their training data. We measure the surprise of an input as the difference in the DL system's behaviour between the input and the training data (i.e., what was learnt during training), and subsequently develop this into an adequacy criterion: a good test input should be sufficiently, but not overly, surprising compared to the training data. Empirical evaluation using a range of DL systems, from simple image classifiers to autonomous driving platforms, shows that systematically sampling inputs based on their surprise can improve the classification accuracy of DL systems against adversarial examples by up to 77.5% via retraining.
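To make the surprise measure concrete, below is a minimal Python sketch of Likelihood-based Surprise Adequacy (LSA), one of the surprise measures the paper proposes. It assumes activation traces (the activation values of a chosen layer, one vector per input) have already been extracted as NumPy arrays; the function name and the stand-in data are hypothetical, and a full implementation would extract traces from the trained model and apply the per-class handling and low-variance neuron filtering described in the paper.

    # Minimal LSA sketch: surprise = negative log density of an input's
    # activation trace under a KDE fitted to the training traces.
    import numpy as np
    from scipy.stats import gaussian_kde

    def lsa(train_traces, test_trace):
        # train_traces: (n_train, n_neurons) activation traces from training data
        # test_trace:   (n_neurons,) activation trace of one new input
        kde = gaussian_kde(train_traces.T)          # density over training traces
        density = kde(test_trace.reshape(-1, 1))[0] # density at the new trace
        return -np.log(max(density, 1e-300))        # clamp to avoid log(0)

    # Hypothetical usage with random stand-in activations:
    rng = np.random.default_rng(0)
    train = rng.normal(size=(2000, 8))   # 2000 training traces, 8 neurons
    seen = rng.normal(size=8)            # input similar to the training data
    unseen = rng.normal(size=8) + 6.0    # input far from the training data
    print(lsa(train, seen))              # relatively low surprise
    print(lsa(train, unseen))            # much higher surprise

Surprise Coverage, the adequacy criterion built on such measures, then discretises the range of surprise values into buckets; a test suite is more adequate the more buckets it covers with at least one input.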

Fri 31 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30  Testing of AI Systems (Place du Canada)
14:00
20m
Talk
CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries
Technical Track
Hung Viet Pham (University of Waterloo), Thibaud Lutellier, Weizhen Qi (University of Science and Technology of China), Lin Tan (Purdue University)
Pre-print
14:20
20m
Talk
Guiding Deep Learning System Testing using Surprise Adequacy
Artifacts Available, Artifacts Evaluated Reusable, Results Reproduced
Technical Track
Jinhan Kim (KAIST), Robert Feldt (Chalmers University of Technology), Shin Yoo (Korea Advanced Institute of Science and Technology)
Authorizer link
Pre-print
14:40
20m
Talk
DeepConcolic: Testing and Debugging Deep Neural Networks
Demonstrations
Youcheng Sun (University of Oxford), Xiaowei Huang (University of Liverpool), Daniel Kroening (University of Oxford), James Sharp (Defence Science and Technology Laboratory, Dstl), Matthew Hill (Dstl), Rob Ashmore (Dstl)
15:00
10m
Talk
Towards Improved Testing For Deep Learning
New Ideas and Emerging Results
Jasmine Sekhon (University of Virginia), Cody Fleming (University of Virginia)
Pre-print
15:10
10m
Talk
Structural Coverage Criteria for Neural Networks Could Be Misleading
New Ideas and Emerging Results
Zenan Li (Nanjing University), Xiaoxing Ma (Nanjing University), Chang Xu (Nanjing University), Chun Cao (Nanjing University)
Pre-print
15:20
10m
Talk
Robustness of Neural Networks: A Probabilistic and Practical Perspective
New Ideas and Emerging Results
Ravi Mangal (Georgia Institute of Technology), Aditya Nori, Alessandro Orso (Georgia Institute of Technology)