ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Wed 29 May 2019 11:00 - 11:20 at Duluth - Testing Effectiveness Chair(s): Diomidis Spinellis

Software testing is an integral part of the software development process. Unfortunately, in many projects bugs remain prevalent despite testing efforts, and testing continues to consume a significant amount of time and resources. This raises the issue of test case quality and prompts us to investigate what makes a good test case. To answer this important question, we interviewed 21 practitioners and surveyed 261 more, drawn from small to large companies and open-source projects across 27 countries, to create and validate 29 hypotheses that describe characteristics of good test cases and testing practices. These characteristics span multiple dimensions, including test case content, size and complexity, coverage, maintainability, and bug detection. We present the highly rated characteristics along with the rationales practitioners gave for agreeing or disagreeing with them, which in turn highlight best practices and trade-offs to consider when creating test cases. Our findings also highlight open problems and opportunities for software engineering researchers to improve practitioner activities and address their pain points.

Wed 29 May
Times are displayed in time zone: Eastern Time (US & Canada)

11:00 - 12:30: Testing Effectiveness (Papers / Journal-First Papers / Software Engineering in Practice / New Ideas and Emerging Results) at Duluth
Chair(s): Diomidis Spinellis (Athens University of Economics and Business)
11:00 - 11:20
Practitioners' Views on Good Software Testing Practices (SEIP, Industry Program)
Software Engineering in Practice
Pavneet Singh Kochhar (Microsoft), Xin Xia (Monash University), David Lo (Singapore Management University)
11:20 - 11:40
Perception and Practices of Differential Testing (SEIP, Industry Program)
Software Engineering in Practice
Muhammad Ali Gulzar (University of California, Los Angeles), Yongkang Zhu (Google), Xiaofeng Han (Google)
11:40 - 11:50
An Interleaving Approach to Combinatorial Testing and Failure-Inducing Interaction Identification (Industry Program, Journal-First)
Journal-First Papers
Xintao Niu, Changhai Nie, Hareton Leung, Yu Lei, Xiaoyin Wang (University of Texas at San Antonio, USA), Jiaxi Xu (School of Information Engineering, Nanjing Xiaozhuang University), Yan Wang
11:50 - 12:00
An Empirical Comparison of Combinatorial Testing, Random Testing and Adaptive Random Testing (Industry Program, Journal-First)
Journal-First Papers
Huayao Wu (Nanjing University), Changhai Nie, Justyna Petke (University College London), Yue Jia (University College London), Mark Harman (Facebook and University College London)
12:00 - 12:10
Assurances in Software Testing: A Roadmap (Industry Program, NIER)
New Ideas and Emerging Results
Marcel Böhme (Monash University)
12:10 - 12:20
Automatic Test Improvement with DSpot: a Study with Ten Mature Open-Source Projects (Industry Program, Journal-First)
Journal-First Papers
Benjamin Danglot (University Lille 1 and INRIA), Oscar Luis Vera Pérez (INRIA), Benoit Baudry (KTH Royal Institute of Technology, Sweden), Martin Monperrus (KTH Royal Institute of Technology)
12:20 - 12:30
Discussion Period