ICSE 2019
Sat 25 - Fri 31 May 2019 Montreal, QC, Canada
Thu 30 May 2019 14:50 - 15:10 at St-Paul / Ste-Catherine - Crowdsourced Knowledge and Feedback Chair(s): Xin Xia

Informal documentation contained in resources such as Q&A websites (e.g., Stack Overflow) is a precious resource for developers, who can find there examples of how to use certain libraries, as well as opinions about the pros and cons of such libraries. Automatically identifying and classifying such opinions can alleviate developers’ burden in performing manual searches, and can be used to recommend libraries that are good from some points of view (e.g., performance) or to highlight those less ideal from other perspectives (e.g., compatibility). We propose POME (Pattern-based Opinion MinEr), an approach that leverages natural language parsing and pattern-matching to classify Stack Overflow sentences referring to libraries according to seven aspects (e.g., performance, usability), and to determine their polarity (positive vs. negative). The patterns have been inferred by manually analyzing 4,363 sentences from Stack Overflow linked to a total of 30 libraries. We evaluated POME by (i) comparing the pattern-matching approach with machine learners leveraging the patterns themselves as well as n-grams extracted from Stack Overflow posts; (ii) assessing the ability of POME to detect the polarity of sentences, as compared to sentiment-analysis tools; and (iii) comparing POME with the state-of-the-art Stack Overflow opinion mining approach, Opiner, through a study involving 24 human evaluators. Our study shows that POME exhibits a higher precision than the state-of-the-art technique (Opiner), in terms of both opinion aspect identification and polarity assessment.
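To give a feel for the kind of classification the abstract describes, here is a minimal sketch of pattern-based opinion mining. The rules below are hypothetical examples in the spirit of POME's approach; the actual patterns were mined from 4,363 Stack Overflow sentences and are not reproduced here.

```python
import re

# Illustrative (aspect, polarity) rules. POME's real patterns operate on
# parsed sentence structure; these simple regexes are stand-ins.
RULES = [
    (re.compile(r"\b(fast(er)?|efficient|low overhead)\b", re.I),
     ("performance", "positive")),
    (re.compile(r"\b(slow(er)?|high overhead)\b", re.I),
     ("performance", "negative")),
    (re.compile(r"\b(easy to use|intuitive|well documented)\b", re.I),
     ("usability", "positive")),
    (re.compile(r"\b(hard to use|confusing|poorly documented)\b", re.I),
     ("usability", "negative")),
]

def classify(sentence):
    """Return (aspect, polarity) for the first matching rule, else None."""
    for pattern, label in RULES:
        if pattern.search(sentence):
            return label
    return None

print(classify("Gson is faster than Jackson for small payloads"))
# → ('performance', 'positive')
print(classify("The API is confusing and poorly documented"))
# → ('usability', 'negative')
```

A rule-based classifier like this trades recall for precision: sentences matching no pattern are left unlabeled rather than guessed at, which is consistent with the higher precision the paper reports over sentiment-analysis baselines.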

Thu 30 May
Times are displayed in time zone: Eastern Time (US & Canada)

14:00 - 15:30: Crowdsourced Knowledge and Feedback (Papers / Journal-First Papers / Technical Track / Software Engineering in Practice) at St-Paul / Ste-Catherine
Chair(s): Xin Xia (Monash University)
14:00 - 14:20
Emerging App Issue Identification from User Feedback: Experience on WeChat (SEIP / Industry Program)
Software Engineering in Practice
Cuiyun Gao (The Chinese University of Hong Kong), Wujie Zheng (Tencent, Inc.), Yuetang Deng (Tencent, Inc.), David Lo (Singapore Management University), Jichuan Zeng, Michael Lyu, Irwin King
14:20 - 14:30
An Empirical Study of Game Reviews on the Steam Platform (Industry Program / Journal-First)
Journal-First Papers
Dayi Lin (Queen's University), Cor-Paul Bezemer (University of Alberta, Canada), Ying Zou (Queen's University, Kingston, Ontario), Ahmed E. Hassan (Queen's University)
14:30 - 14:50
How Reliable is the Crowdsourced Knowledge of Security Implementation? (Technical Track)
Technical Track
Mengsu Chen (Virginia Tech), Felix Fischer (Technical University of Munich), Na Meng (Virginia Tech), Xiaoyin Wang (University of Texas at San Antonio, USA), Jens Grossklags (Technical University of Munich)
14:50 - 15:10
Pattern-based Mining of Opinions in Q&A Websites (Technical Track)
Technical Track
Bin Lin (Università della Svizzera italiana (USI)), Fiorella Zampetti (University of Sannio), Gabriele Bavota (Università della Svizzera italiana (USI)), Massimiliano Di Penta (University of Sannio), Michele Lanza (Università della Svizzera italiana (USI))
15:10 - 15:20
How Do Users Revise Answers on Technical Q&A Websites? A Case Study on Stack Overflow (Industry Program / Journal-First)
Journal-First Papers
Shaowei Wang (Queen's University), Tse-Hsun (Peter) Chen (Concordia University), Ahmed E. Hassan (Queen's University)
15:20 - 15:30
Discussion Period