ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada

The JVM is the cornerstone of the widely used Java platform, so it is critical to ensure the reliability and robustness of popular JVM implementations. However, little research exists on validating production JVMs. One notable effort is classfuzz, which mutates Java bytecode syntactically to stress-test different JVMs; classfuzz mainly produces illegal bytecode files and uncovers defects in JVMs’ startup processes. It remains a significant challenge to effectively test JVMs’ bytecode verifiers and execution engines to expose deeper bugs. This paper tackles this challenge by introducing classming, a novel, effective approach to deep, differential JVM testing. The key to classming is live bytecode mutation, a technique that generates, from a seed bytecode file f, likely valid, executable (live) bytecode files: (1) capture the seed f’s live bytecode; (2) repeatedly manipulate the control and data flow in f’s live bytecode to generate semantically different variants; and (3) selectively accept the generated variants to steer the mutation process toward live, diverse variants. The generated variants are then employed to differentially test JVMs. We have evaluated classming on mainstream JVM implementations, including OpenJDK’s HotSpot and IBM’s J9, by mutating the DaCapo benchmarks. Our results show that classming is highly effective at uncovering deep JVM differences: more than 1,800 of the generated classes exposed JVM differences, and more than 30 triggered JVM crashes. We analyzed and reported the JVM runtime differences and crashes, of which 14 have already been confirmed or fixed, including a highly critical security vulnerability (CVE-2017-1376).
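The three-step process in the abstract (capture, mutate, selectively accept, then differentially test) can be sketched as a toy loop. Everything below is a hypothetical illustration, not the authors' implementation: `mutate()` and `is_live()` are stand-ins for the real control-/data-flow rewriting and for actually verifying and executing a class on a JVM, and the "bytecode" is a simple list of instruction tuples.

```python
import random

def mutate(bytecode):
    # Placeholder for step (2): rewrite one "instruction" in a toy
    # bytecode list; the real tool manipulates control and data flow.
    i = random.randrange(len(bytecode))
    variant = list(bytecode)
    variant[i] = ("GOTO", random.randrange(len(bytecode)))
    return variant

def is_live(bytecode):
    # Placeholder liveness check: here, HALT may appear only as the
    # final instruction. The real check runs the class on a JVM to see
    # that it still verifies and executes.
    return all(op != "HALT" for op, _ in bytecode[:-1])

def classming_loop(seed, iterations=100):
    """Step (3): selectively accept variants to steer mutation
    toward live, diverse variants."""
    accepted = [seed]
    current = seed
    for _ in range(iterations):
        variant = mutate(current)
        if is_live(variant):        # reject dead variants
            accepted.append(variant)
            current = variant       # keep mutating the accepted variant
    return accepted

def differential_test(variants, run_jvm_a, run_jvm_b):
    # Run each live variant on two JVMs and report those whose
    # observable behavior diverges (a candidate JVM difference).
    return [v for v in variants if run_jvm_a(v) != run_jvm_b(v)]
```

The accept/reject step is what distinguishes this from purely syntactic fuzzing: mutation continues from variants already known to be live, so the search stays in the space of executable classes that reach the JVMs' verifiers and execution engines.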

Fri 31 May

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:20
Testing and Analysis: Domain-Specific Approaches (Technical Track / Journal-First Papers) at Place du Canada
Chair(s): Gregory Gay (University of South Carolina; Chalmers | University of Gothenburg)
16:00
20m
Talk
Detecting Incorrect Build Rules (Artifacts Available; ACM SIGSOFT Distinguished Paper Award)
Technical Track
Nandor Licker University of Cambridge, Andrew Rice University of Cambridge, UK
Pre-print Media Attached
16:20
20m
Talk
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Technical Track
Jingyi Wang National University of Singapore, Singapore, Guoliang Dong Computer College of Zhejiang University, Jun Sun Singapore Management University, Singapore, Xinyu Wang Zhejiang University, Peixin Zhang Zhejiang University
16:40
10m
Talk
Oracles for Testing Software Timeliness with Uncertainty
Journal-First Papers
Chunhui Wang University of Luxembourg, Fabrizio Pastore University of Luxembourg, Lionel Briand SnT Centre/University of Luxembourg
16:50
20m
Talk
Deep Differential Testing of JVM Implementations
Technical Track
Yuting Chen Shanghai Jiao Tong University, Ting Su Nanyang Technological University, Singapore, Zhendong Su ETH Zurich
17:10
10m
Talk
Discussion Period
Papers