ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada

Automated build systems are routinely used by software engineers to minimize the number of objects that need to be recompiled after incremental changes to the source files of a project. To achieve efficient and correct builds, developers must provide the build tools with dependency information between the files and modules of a project, usually expressed in a macro language specific to each build tool. To guarantee correctness, the authors of these build definitions are responsible for enumerating all the files whose contents an output depends on. Unfortunately, this is a tedious process and not all dependencies are captured in practice, which leads to incorrect builds. We automatically uncover such missing dependencies through a novel method that we call build fuzzing. The correctness of build definitions is verified by modifying files in a project, triggering incremental builds, and comparing the set of changed files to the set of expected changes. These sets are determined using a dependency graph inferred by tracing the system calls executed during a clean build. We evaluate our method by exhaustively testing build rules of open-source projects, uncovering issues leading to race conditions and faulty builds in 31 of them. We discuss the bugs we detect, identifying anti-patterns in the use of the macro languages. We fix some of the issues in projects where the features of build systems allow a clean solution.
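The core check behind build fuzzing can be sketched as a graph comparison: given a syscall-traced dependency graph, compute which outputs should be rebuilt after a file changes, and flag any output the incremental build failed to remake. The sketch below is illustrative only; the function names, the dictionary-based graph encoding, and the example files are assumptions, not the authors' actual tool.

```python
# Illustrative sketch of the build-fuzzing consistency check (not the paper's tool).

def expected_rebuilds(dep_graph, modified):
    """Outputs that should be rebuilt when `modified` changes.

    `dep_graph` maps each output file to the set of files it reads,
    as inferred by tracing system calls during a clean build.
    Dependencies are followed transitively to a fixed point: if a.o
    reads a.c and app reads a.o, touching a.c must rebuild both.
    """
    stale = set()
    changed = True
    while changed:
        changed = False
        for out, inputs in dep_graph.items():
            if out not in stale and (modified in inputs or inputs & stale):
                stale.add(out)
                changed = True
    return stale

def missing_dependencies(dep_graph, modified, actually_rebuilt):
    """Outputs the build system failed to remake after `modified` changed."""
    return expected_rebuilds(dep_graph, modified) - set(actually_rebuilt)

# Hypothetical example: header.h is read by both object files, but the
# build definition declares the dependency only for a.o.
graph = {
    "a.o": {"a.c", "header.h"},
    "b.o": {"b.c", "header.h"},
    "app": {"a.o", "b.o"},
}
# An incremental build after touching header.h remade only a.o and app:
print(missing_dependencies(graph, "header.h", {"a.o", "app"}))  # {'b.o'}
```

In the real method, `actually_rebuilt` would come from observing an incremental build (e.g., via file modification times), while `dep_graph` comes from the syscall trace; a non-empty result is evidence of a missing dependency in the build rules.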

Fri 31 May
Times are displayed in time zone: Eastern Time (US & Canada)

16:00 - 17:20: Testing and Analysis: Domain-Specific Approaches (Technical Track / Journal-First Papers) at Place du Canada
Chair(s): Gregory Gay (University of South Carolina; Chalmers | University of Gothenburg)
16:00 - 16:20
Detecting Incorrect Build Rules (Artifacts Available; ACM SIGSOFT Distinguished Paper Award)
Technical Track
Nandor Licker (University of Cambridge), Andrew Rice (University of Cambridge, UK)
Pre-print | Media Attached
16:20 - 16:40
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Technical Track
Jingyi Wang (National University of Singapore, Singapore), Guoliang Dong (Computer College of Zhejiang University), Jun Sun (Singapore Management University, Singapore), Xinyu Wang (Zhejiang University), Peixin Zhang (Zhejiang University)
16:40 - 16:50
Oracles for Testing Software Timeliness with Uncertainty
Journal-First Papers
Chunhui Wang (University of Luxembourg), Fabrizio Pastore (University of Luxembourg), Lionel Briand (SnT Centre/University of Luxembourg)
16:50 - 17:10
Deep Differential Testing of JVM Implementations
Technical Track
Yuting Chen (Shanghai Jiao Tong University), Ting Su (Nanyang Technological University, Singapore), Zhendong Su (ETH Zurich)
17:10 - 17:20
Discussion Period