CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries
Technical Track
Deep learning (DL) systems are widely used in domains including aircraft collision avoidance systems, Alzheimer’s disease diagnosis, and autonomous driving. Despite the requirement for high reliability, DL systems are difficult to test. Existing DL testing work focuses on testing the DL models, not the implementations (e.g., DL software libraries) of the models. One key challenge of testing DL libraries is the difficulty of knowing the expected output of a DL library given an input instance. Fortunately, multiple DL libraries implement the same DL algorithms. Thus, we propose CRADLE, a new approach that focuses on finding and localizing bugs in DL software libraries. CRADLE (1) performs cross-implementation inconsistency checking to detect bugs in DL libraries, and (2) leverages anomaly propagation tracking and analysis to localize faulty functions in DL libraries that cause the bugs. We evaluate CRADLE on three libraries (TensorFlow, CNTK, and Theano), 11 datasets (including ImageNet, MNIST, and the KGS Go game dataset), and 30 pre-trained models. CRADLE detects 12 bugs and 104 unique inconsistencies, and highlights functions relevant to the causes of all 104 inconsistencies.
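The detection idea is easy to illustrate: Keras can run the same pre-trained model on top of TensorFlow, CNTK, or Theano, so the backends can be cross-checked against each other on the same input. The following is a minimal sketch of that comparison, not CRADLE's implementation; the file names (model.h5, input.npy), the plain element-wise tolerance, and the subprocess-based backend switching are assumptions for illustration (CRADLE itself uses class-based and MAD-based output distance metrics and additionally localizes the faulty backend functions).

```python
# Minimal sketch (not CRADLE's implementation) of cross-backend
# inconsistency checking: run the same pre-trained Keras model under
# different backends and compare their predictions. Each backend runs
# in a fresh process via KERAS_BACKEND, since a Keras backend cannot
# be switched after import.
import json
import os
import subprocess
import sys

import numpy as np

WORKER = """
import json, sys
import numpy as np
from keras.models import load_model

model_path, input_path = sys.argv[1], sys.argv[2]
model = load_model(model_path)   # same pre-trained model file
x = np.load(input_path)          # same input instance
print(json.dumps(model.predict(x).tolist()))
"""


def predict_with_backend(backend, model_path, input_path):
    """Run the model once under the given Keras backend
    (e.g. 'tensorflow', 'theano', 'cntk') and return its predictions."""
    env = dict(os.environ, KERAS_BACKEND=backend)
    out = subprocess.run(
        [sys.executable, "-c", WORKER, model_path, input_path],
        env=env, capture_output=True, text=True, check=True)
    return np.array(json.loads(out.stdout))


def check_inconsistency(model_path, input_path, backends, tol=1e-3):
    """Flag an inconsistency when two backends disagree beyond a tolerance.
    A plain element-wise tolerance is used here only for illustration."""
    preds = {b: predict_with_backend(b, model_path, input_path)
             for b in backends}
    ref_backend, ref = next(iter(preds.items()))
    for backend, p in preds.items():
        if not np.allclose(p, ref, atol=tol):
            print(f"Inconsistency between {ref_backend} and {backend}")


if __name__ == "__main__":
    # Hypothetical model/input files; any Keras-loadable model would do.
    check_inconsistency("model.h5", "input.npy", ["tensorflow", "theano"])
```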
Fri 31 May (displayed time zone: Eastern Time (US & Canada))
14:00 - 15:30 | Testing of AI Systems (New Ideas and Emerging Results / Demonstrations / Technical Track) at Place du Canada | Chair(s): Marija Mikic (Google)
14:00 (20m) Talk | CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries (Technical Track) | Hung Viet Pham (University of Waterloo), Thibaud Lutellier, Weizhen Qi (University of Science and Technology of China), Lin Tan (Purdue University) | Pre-print
14:20 (20m) Talk | Guiding Deep Learning System Testing using Surprise Adequacy (Technical Track) | Jinhan Kim (KAIST), Robert Feldt (Chalmers University of Technology), Shin Yoo (Korea Advanced Institute of Science and Technology) | Authorizer link, Pre-print
14:40 (20m) Talk | DeepConcolic: Testing and Debugging Deep Neural Networks (Demonstrations) | Youcheng Sun (University of Oxford), Xiaowei Huang (University of Liverpool), Daniel Kroening (University of Oxford), James Sharp (Defence Science and Technology Laboratory, Dstl), Matthew Hill (Defence Science and Technology Laboratory, Dstl), Rob Ashmore (Defence Science and Technology Laboratory, Dstl)
15:00 (10m) Talk | Towards Improved Testing For Deep Learning (New Ideas and Emerging Results) | Pre-print
15:10 (10m) Talk | Structural Coverage Criteria for Neural Networks Could Be Misleading (New Ideas and Emerging Results) | Zenan Li (Nanjing University), Xiaoxing Ma (Nanjing University), Chang Xu (Nanjing University), Chun Cao (Nanjing University) | Pre-print
15:20 (10m) Talk | Robustness of Neural Networks: A Probabilistic and Practical Perspective (New Ideas and Emerging Results)