In a 2006 article for Wired Magazine, Jeff Howe defined crowdsourcing as the outsourcing of a task traditionally performed by a single employee to a large group of people in the form of an open call. Since then, by adapting crowdsourcing into different forms, some of the most successful new companies on the market have used this idea to make people's lives easier and better. Meanwhile, software testing has long been recognized as a time-consuming and expensive activity. Mobile application testing is especially difficult, largely due to compatibility issues: a mobile application must work on devices with different operating systems (e.g. iOS, Android), manufacturers (e.g. Huawei, Samsung), and keypad types (e.g. virtual keypad, hard keypad). One cannot be 100% sure that, just because a tested application works well on one device, it will run smoothly on all others.
Crowdsourced testing is an emerging paradigm that can improve the cost-effectiveness of software testing and accelerate the process, especially for mobile applications. It entrusts testing tasks to online crowdworkers, whose diverse testing devices/contexts, experience, and skill sets can contribute significantly to more reliable, cost-effective, and efficient testing results. It has already been adopted by many software organizations, including Google, Facebook, Amazon, and Microsoft.
This book provides a comprehensive overview of crowdsourced testing research and practice. It shows how machine learning, data mining, and deep learning techniques can be employed to process the data generated during the crowdsourced testing process, to facilitate the management of crowdsourced testing, and to improve the quality of crowdsourced testing.
Table of Contents
Part I Preliminary of Crowdsourced Testing.- 1 Introduction.- 2 Preliminaries.- 3 Book Structure.- Part II Supporting Technology for Crowdsourced Testing Workers.- 4 Characterization of Crowd Worker.- 5 Task Recommendation for Crowd Worker.- Part III Supporting Technology for Crowdsourced Testing Tasks.- 6 Crowd Worker Recommendation for Testing Task.- 7 Crowdsourced Testing Task Management.- Part IV Supporting Technology for Crowdsourced Testing Results.- 8 Classification of Crowdsourced Testing Reports.- 9 Duplicate Detection of Crowdsourced Testing Reports.- 10 Prioritization of Crowdsourced Testing Reports.- 11 Summarization of Crowdsourced Testing Reports.- 12 Quality Assessment of Crowdsourced Testing Cases.- Part V Conclusions and Future Perspectives.- 13 Conclusions.- 14 Perspectives.
About the Authors
Qing Wang is a researcher at the Institute of Software, Chinese Academy of Sciences (ISCAS). She is also the deputy chief engineer of ISCAS and director of the Laboratory for Internet Software Technologies at ISCAS. She currently serves as a director of the Board of Directors of the International Software and Systems Processes Association (ISSPA), a member of the International Software Engineering Research Network (ISERN), a member of the editorial boards of the Information and Software Technology Journal (IST) and the Journal of Software: Evolution and Process (JSEP), and a CMMI lead appraiser. She served as general chair of ESEM in 2015 and program chair of ICSP from 2007 to 2009. Her research lies in the areas of software process, software quality assurance, requirements engineering, knowledge engineering, big data, and artificial intelligence for software engineering. She has 20 years of experience in software process and quality assurance technologies. Her recent research on software process and quality management has won the second prize of National Progress in Science and Technology of China and the second prize of Progress in Science and Technology of Beijing. She has edited/co-edited 5 books and published more than 100 papers in high-level international conferences and journals.
Zhenyu Chen is the founder of Mooctest (mooctest.net), and he is currently a Professor at the Software Institute, Nanjing University. He received his bachelor's and Ph.D. degrees in mathematics from Nanjing University. He worked as a postdoctoral researcher at the School of Computer Science and Engineering, Southeast University, China. His research interests focus on software analysis and testing. He has more than 100 publications in journals and proceedings, including TOSEM, TSE, JSS, SQJ, IJSEKE, ISSTA, ICST, and QSIC. He has served as an associate editor for IEEE Transactions on Reliability, as PC co-chair of QRS 2016, QSIC 2013, AST 2013, and IWPD 2012, and as a program committee member of many international conferences. He also founded the NJSD (Nanjing Global Software Development Conference). He has won research funding from several competitive sources such as NSFC. He owns more than 40 patents (22 granted), and some of his patents have been transferred to well-known software companies such as Baidu, Alibaba, and Huawei.
Junjie Wang is an associate researcher at the Institute of Software, Chinese Academy of Sciences (ISCAS). She received her Ph.D. degree from ISCAS in 2015. She was a visiting scholar at North Carolina State University from September 2017 to September 2018, where she worked with Prof. Tim Menzies. Her research interests include crowdsourced testing, mining software repositories, and intelligent software engineering. She has more than 20 high-quality publications and has received the ACM SIGSOFT Distinguished Paper Award at ICSE in 2019 and 2020, as well as the IEEE Best Paper Award at QRS 2019.
Yang Feng received his bachelor's and master's degrees in software engineering from Nanjing University in 2011 and 2013, respectively. He obtained his Ph.D. from the University of California, Irvine. He has published more than 30 refereed papers and regularly serves as a PC member and reviewer for international conferences and journals. His current research interests lie in software testing, crowdsourced software engineering, and program analysis.