ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Thu 30 May 2019 14:30 - 14:50 at St-Paul / Ste-Catherine - Crowdsourced Knowledge and Feedback Chair(s): Xin Xia

Stack Overflow (SO) is the most popular online Q&A site for developers to share their programming expertise. Given multiple answers to certain questions, developers may take the accepted answer, the answer from a person with high reputation, or the one frequently suggested. However, researchers observed exploitable security vulnerabilities in popularly suggested code, which was widely reused by high-profile applications. This observation inspires us to explore the following questions: How much can we trust the security implementation suggestions on SO? If suggested answers are vulnerable, can developers trust the community’s dynamics to infer the vulnerability and identify a secure counterpart?

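One kind of vulnerable suggestion frequently observed on SO is advice to silence TLS errors by disabling certificate verification. The sketch below is our own illustration (not code from the paper), using Python's standard `ssl` module to contrast the insecure pattern with its secure counterpart:

```python
import ssl

# Insecure pattern often suggested in answers: create a context that skips
# certificate and hostname verification, which exposes clients to
# man-in-the-middle attacks.
insecure_ctx = ssl._create_unverified_context()

# Secure counterpart: the default context verifies certificates and
# hostnames against the system trust store.
secure_ctx = ssl.create_default_context()

# The difference is visible in the verification settings.
assert insecure_ctx.verify_mode == ssl.CERT_NONE
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname
```

Both snippets "work" in the sense of making connection errors disappear, which is exactly why community feedback (views, scores, duplicates) cannot distinguish them.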
To answer these questions, we conducted a study of security-related SO posts, contrasting secure and insecure advice with the community-given content evaluation. We compiled 953 different groups of similar code examples and labeled their security, identifying 781 secure answer posts and 648 insecure ones. Compared with secure suggestions, insecure ones had higher view counts (36,280 vs. 18,810), received higher scores (14 vs. 5), and had significantly more duplicates (4 vs. 3) on average. 38% of the posts provided by highly reputed, so-called trusted users were insecure.

Our findings show that, given the distribution of secure and insecure code on SO, users who are laymen in security must rely on additional advice and guidance. However, the community-given feedback does not allow differentiating secure from insecure choices. Moreover, the reputation mechanism fails to indicate trustworthy users with respect to security questions, ultimately leaving other users wandering around in a software security minefield.

Thu 30 May
Times are displayed in time zone: Eastern Time (US & Canada)

14:00 - 15:30: Crowdsourced Knowledge and Feedback (Papers / Journal-First Papers / Technical Track / Software Engineering in Practice) at St-Paul / Ste-Catherine
Chair(s): Xin Xia (Monash University)
14:00 - 14:20
Emerging App Issue Identification from User Feedback: Experience on WeChat (SEIP, Industry Program)
Software Engineering in Practice
Cuiyun Gao (The Chinese University of Hong Kong), Wujie Zheng (Tencent, Inc.), Yuetang Deng (Tencent, Inc.), David Lo (Singapore Management University), Jichuan Zeng, Michael Lyu, Irwin King
14:20 - 14:30
An Empirical Study of Game Reviews on the Steam Platform (Industry Program, Journal-First)
Journal-First Papers
Dayi Lin (Queen's University), Cor-Paul Bezemer (University of Alberta, Canada), Ying Zou (Queen's University, Kingston, Ontario), Ahmed E. Hassan (Queen's University)
14:30 - 14:50
How Reliable is the Crowdsourced Knowledge of Security Implementation? (Technical Track)
Technical Track
Mengsu Chen (Virginia Tech), Felix Fischer (Technical University of Munich), Na Meng (Virginia Tech), Xiaoyin Wang (University of Texas at San Antonio, USA), Jens Grossklags (Technical University of Munich)
14:50 - 15:10
Pattern-based Mining of Opinions in Q&A Websites (Technical Track)
Technical Track
Bin Lin (Università della Svizzera italiana (USI)), Fiorella Zampetti (University of Sannio), Gabriele Bavota (Università della Svizzera italiana (USI)), Massimiliano Di Penta (University of Sannio), Michele Lanza (Università della Svizzera italiana (USI))
15:10 - 15:20
How Do Users Revise Answers on Technical Q&A Websites? A Case Study on Stack Overflow (Industry Program, Journal-First)
Journal-First Papers
Shaowei Wang (Queen's University), Tse-Hsun (Peter) Chen (Concordia University), Ahmed E. Hassan (Queen's University)
15:20 - 15:30
Discussion Period