ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Fri 31 May 2019 15:00 - 15:10 at Place du Canada - Testing of AI Systems Chair(s): Marija Mikic

The growing use of deep neural networks in safety-critical applications makes adequate testing essential, so that incorrect behavior on corner-case inputs can be detected and corrected before the networks are actually deployed. Deep neural networks lack an explicit control-flow structure, which makes it impossible to apply traditional software testing criteria such as code coverage to them. In this paper, we examine existing testing methods for deep neural networks, the opportunities for improving them, and the need for a fast, scalable, generalizable end-to-end testing method for deep neural networks. We also propose a coverage criterion for deep neural networks that aims to capture all possible parts of a network's logic.
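The abstract does not spell out the proposed criterion, so the following is only a minimal sketch of what a coverage criterion for a deep neural network can look like, using plain neuron coverage (the fraction of neurons activated above a threshold by at least one test input) on a small hypothetical fully connected ReLU network. The layer sizes, threshold, and random test inputs below are assumptions for illustration, not the criterion proposed in the paper.

# Minimal sketch of a DNN coverage criterion: plain neuron coverage on a
# hypothetical fully connected ReLU network (NOT the criterion proposed in
# the paper, which is not described on this page).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def neuron_coverage(weights, biases, test_inputs, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one test input."""
    activated = [np.zeros(b.shape, dtype=bool) for b in biases]
    for x in test_inputs:
        a = x
        for i, (W, b) in enumerate(zip(weights, biases)):
            a = relu(W @ a + b)
            activated[i] |= a > threshold  # mark neurons this input turned on
    covered = sum(int(m.sum()) for m in activated)
    total = sum(m.size for m in activated)
    return covered / total

# Hypothetical 4-8-3 network and a tiny random test set.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(3)]
tests = [rng.standard_normal(4) for _ in range(20)]
print(f"neuron coverage: {neuron_coverage(weights, biases, tests):.2f}")

High structural coverage under such a criterion does not by itself guarantee thorough testing; the later talk in this session, "Structural Coverage Criteria for Neural Networks Could Be Misleading", discusses exactly this limitation.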

Fri 31 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30: Testing of AI Systems at Place du Canada (Chair: Marija Mikic)
14:00
20m
Talk
CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries
Technical Track
Hung Viet Pham University of Waterloo, Thibaud Lutellier, Weizhen Qi University of Science and Technology of China, Lin Tan Purdue University
Pre-print
14:20
20m
Talk
Guiding Deep Learning System Testing using Surprise Adequacy
Artifacts Available, Artifacts Evaluated Reusable, Results Reproduced
Technical Track
Jinhan Kim KAIST, Robert Feldt Chalmers University of Technology, Shin Yoo Korea Advanced Institute of Science and Technology
Authorizer link, Pre-print
14:40
20m
Talk
DeepConcolic: Testing and Debugging Deep Neural Networks
Demonstrations
Youcheng Sun University of Oxford, Xiaowei Huang University of Liverpool, Daniel Kroening University of Oxford, James Sharp Defence Science and Technology Laboratory (Dstl), Matthew Hill Defence Science and Technology Laboratory (Dstl), Rob Ashmore Defence Science and Technology Laboratory (Dstl)
15:00
10m
Talk
Towards Improved Testing For Deep Learning
New Ideas and Emerging Results
Jasmine Sekhon University of Virginia, Cody Fleming University of Virginia
Pre-print
15:10
10m
Talk
Structural Coverage Criteria for Neural Networks Could Be Misleading
New Ideas and Emerging Results
Zenan Li Nanjing University, Xiaoxing Ma Nanjing University, Chang Xu Nanjing University, Chun Cao Nanjing University
Pre-print
15:20
10m
Talk
Robustness of Neural Networks: A Probabilistic and Practical Perspective
New Ideas and Emerging Results
Ravi Mangal Georgia Institute of Technology, Aditya Nori, Alessandro Orso Georgia Tech