ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Thu 30 May 2019 14:30 - 14:50 at St-Paul / Ste-Catherine - Crowdsourced Knowledge and Feedback Chair(s): Xin Xia

Stack Overflow (SO) is the most popular online Q&A site for developers to share their programming expertise. Given multiple answers to a question, developers may take the accepted answer, the answer from a person with high reputation, or the one most frequently suggested. However, researchers observed exploitable security vulnerabilities in popularly suggested code, which was widely reused by high-profile applications. This observation inspires us to explore the following questions: How much can we trust the security implementation suggestions on SO? If suggested answers are vulnerable, can developers rely on the community's dynamics to infer the vulnerability and identify a secure counterpart?

To answer these questions, we conducted a study on security-related SO posts by contrasting secure and insecure advice with the community-given content evaluation. We compiled 953 different groups of similar code examples and labeled their security, identifying 781 secure answer posts and 648 insecure ones. Compared with secure suggestions, insecure ones had higher view counts (36,280 vs. 18,810), received a higher score (14 vs. 5), and had significantly more duplicates (4 vs. 3) on average. 38% of the posts provided by highly reputed so-called trusted users were insecure.

Our findings show that, given the distribution of secure and insecure code on SO, users who are laymen in security must rely on additional advice and guidance. However, the community-given feedback does not allow differentiating secure from insecure choices. Moreover, the reputation mechanism fails to indicate trustworthy users with respect to security questions, ultimately leaving other users wandering around in a software security minefield.

Thu 30 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Crowdsourced Knowledge and Feedback (Journal-First Papers / Technical Track / Software Engineering in Practice / Papers) at St-Paul / Ste-Catherine
Chair(s): Xin Xia Monash University
14:00
20m
Talk
Emerging App Issue Identification from User Feedback: Experience on WeChat (SEIP, Industry Program)
Software Engineering in Practice
Cuiyun Gao The Chinese University of Hong Kong, Wujie Zheng Tencent, Inc., Yuetang Deng Tencent, Inc., David Lo Singapore Management University, Jichuan Zeng, Michael Lyu, Irwin King
14:20
10m
Talk
An Empirical Study of Game Reviews on the Steam Platform (Industry Program, Journal-First)
Journal-First Papers
Dayi Lin Queen's University, Cor-Paul Bezemer University of Alberta, Canada, Ying Zou Queen's University, Kingston, Ontario, Ahmed E. Hassan Queen's University
14:30
20m
Talk
How Reliable is the Crowdsourced Knowledge of Security Implementation? (Technical Track)
Technical Track
Mengsu Chen Virginia Tech, Felix Fischer Technical University of Munich, Na Meng Virginia Tech, Xiaoyin Wang University of Texas at San Antonio, USA, Jens Grossklags Technical University of Munich
14:50
20m
Talk
Pattern-based Mining of Opinions in Q&A Websites (Technical Track)
Technical Track
Bin Lin Università della Svizzera italiana (USI), Fiorella Zampetti University of Sannio, Gabriele Bavota Università della Svizzera italiana (USI), Massimiliano Di Penta University of Sannio, Michele Lanza Università della Svizzera italiana (USI)
15:10
10m
Talk
How Do Users Revise Answers on Technical Q&A Websites? A Case Study on Stack Overflow (Industry Program, Journal-First)
Journal-First Papers
Shaowei Wang Queen's University, Tse-Hsun (Peter) Chen Concordia University, Ahmed E. Hassan Queen's University
15:20
10m
Talk
Discussion Period
Papers