ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Wed 29 May 2019 12:00 - 12:10 at Duluth - Testing Effectiveness Chair(s): Diomidis Spinellis

As software engineering researchers, we understand well how to make testing more effective and efficient at finding bugs. However, as fuzzing (here, short for automated software testing) becomes more widely adopted in practice, practitioners are asking: What assurances does a fuzzing campaign that exposes no bugs actually provide? When is it safe to stop the fuzzer at an acceptable residual risk? How much longer should the fuzzer run to achieve sufficient coverage?
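
To make the first question concrete, one standard frequentist answer is the "rule of three": if n independently sampled inputs expose no bug, the 95% upper confidence bound on the probability that a random input triggers a bug is roughly 3/n. The Python sketch below is our own illustration of that idea, not material from the paper; the function name and the independence assumption are ours.

```python
def residual_risk_upper_bound(num_inputs: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the per-input failure probability after a
    campaign of `num_inputs` independently sampled inputs found no bug.

    If the true failure probability is p, the chance of observing zero
    failures in n trials is (1 - p)^n.  Solving (1 - p)^n = 1 - confidence
    for p gives the bound below; for confidence = 0.95 it is roughly 3/n
    (the classical "rule of three").
    """
    if num_inputs <= 0:
        raise ValueError("need at least one executed input")
    return 1.0 - (1.0 - confidence) ** (1.0 / num_inputs)


# Example: a campaign executed 1,000,000 inputs without finding a bug.
# With 95% confidence the per-input failure probability is below ~3e-6,
# assuming inputs are drawn independently from the same distribution.
print(residual_risk_upper_bound(1_000_000))
```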

It is time for us to move beyond the innovation of increasingly sophisticated testing techniques, to build a body of knowledge around the explication and quantification of the testing process, and to develop sound methodologies to estimate and extrapolate these quantities with measurable accuracy. In our vision of the future, practitioners leverage a rich statistical toolset to assess residual risk, to obtain statistical guarantees, and to analyze the cost-benefit trade-off of ongoing fuzzing campaigns. We propose a general framework as a starting point to tackle this fundamental challenge and discuss a number of concrete opportunities for future research.
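
As one example of the kind of statistical toolset this vision alludes to, estimators from ecology can be repurposed for fuzzing: the Good-Turing estimate f1/n of the probability that the next input exercises a previously unseen coverage element gives a data-driven signal for when a campaign has stopped making progress. The sketch below, including its function name and the coverage-log format, is our own illustration under the assumption that each executed input reports the set of coverage elements it exercised.

```python
from collections import Counter
from typing import Iterable, Set


def discovery_probability(coverage_per_input: Iterable[Set[str]]) -> float:
    """Good-Turing estimate of the probability that the *next* input
    exercises a previously unseen coverage element.

    f1 = number of coverage elements exercised by exactly one input so far,
    n  = number of inputs executed so far.
    The estimate f1 / n is the species-discovery extrapolation that the
    roadmap argues fuzzing campaigns should adopt as a stopping signal.
    """
    incidence = Counter()   # coverage element -> number of inputs exercising it
    num_inputs = 0
    for covered in coverage_per_input:
        num_inputs += 1
        incidence.update(covered)
    if num_inputs == 0:
        return 1.0          # nothing executed yet: everything is undiscovered
    singletons = sum(1 for count in incidence.values() if count == 1)
    return singletons / num_inputs


# Hypothetical campaign log: each entry lists the branches one input covered.
campaign = [{"b1", "b2"}, {"b1"}, {"b1", "b3"}, {"b2", "b4"}]
# b3 and b4 were each seen by exactly one input, so the estimate is 2 / 4 = 0.5:
# the campaign is still discovering new behaviour and should keep running.
print(discovery_probability(campaign))
```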

Wed 29 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Testing Effectiveness (Journal-First Papers / Software Engineering in Practice / Papers / New Ideas and Emerging Results) at Duluth
Chair(s): Diomidis Spinellis Athens University of Economics and Business
11:00
20m
Talk
Practitioners' Views on Good Software Testing Practices (SEIP, Industry Program)
Software Engineering in Practice
Pavneet Singh Kochhar Microsoft, Xin Xia Monash University, David Lo Singapore Management University
11:20
20m
Talk
Perception and Practices of Differential Testing (SEIP, Industry Program)
Software Engineering in Practice
Muhammad Ali Gulzar University of California, Los Angeles, Yongkang Zhu Google, Xiaofeng Han Google
11:40
10m
Talk
An interleaving approach to combinatorial testing and failure-inducing interaction identification (Industry Program, Journal-First)
Journal-First Papers
Xintao Niu, Changhai Nie, Hareton Leung, Yu Lei, Xiaoyin Wang University of Texas at San Antonio, USA, Jiaxi Xu School of Information Engineering, Nanjing Xiaozhuang University, Yan Wang
11:50
10m
Talk
An Empirical Comparison of Combinatorial Testing, Random Testing and Adaptive Random Testing (Industry Program, Journal-First)
Journal-First Papers
Huayao Wu Nanjing University, Changhai Nie, Justyna Petke University College London, Yue Jia University College London, Mark Harman Facebook and University College London
12:00
10m
Talk
Assurances in Software Testing: A Roadmap (Industry Program, NIER)
New Ideas and Emerging Results
Marcel Böhme Monash University
Pre-print
12:10
10m
Talk
Automatic Test Improvement with DSpot: a Study with Ten Mature Open-Source Projects (Industry Program, Journal-First)
Journal-First Papers
Benjamin Danglot University Lille 1 and INRIA, Oscar Luis Vera Pérez INRIA, Benoit Baudry KTH Royal Institute of Technology, Sweden, Martin Monperrus KTH Royal Institute of Technology
12:20
10m
Talk
Discussion Period
Papers