Year: 2018 Volume: 26 Issue: 3 Page Range: 1343 - 1353 Text Language: English DOI: 10.3906/elk-1707-173 Indexing Date: 23-10-2018

Will it pass? Predicting the outcome of a source code review

Abstract:
It has been observed that allowing source code changes to be made only after a source code review has a positive impact on the quality and lifetime of the resulting software. In some cases, however, the review process takes quite a long time, which increases software development costs and reduces employee job satisfaction. Mechanisms that predict what kind of feedback reviewers will provide, and what revisions they will request, can reduce how often this problem occurs. With such mechanisms, developers can improve the maturity of their change requests before the review process starts, so that once the review begins it advances more quickly and smoothly. In this study, as a first step towards this goal, we developed a mechanism that predicts whether a change proposal will require any revisions before it is approved.
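The abstract frames the task as binary classification: given a proposed change, predict "needs revisions" versus "approved as-is". The sketch below is illustrative only and is not the authors' actual model or feature set; the features (lines changed, files touched) and the training data are hypothetical assumptions, and it trains a plain logistic-regression classifier with stdlib Python to show the shape of such a predictor.

```python
import math

# Hypothetical training data: (lines_changed, files_touched) -> 1 if the
# review requested revisions, 0 if it was approved as-is. Illustrative only.
TRAIN = [
    ((12, 1), 0), ((8, 1), 0), ((30, 2), 0), ((25, 1), 0),
    ((200, 6), 1), ((150, 4), 1), ((320, 9), 1), ((90, 5), 1),
]

def predict(w, b, x):
    """Sigmoid of the linear score: estimated P(revisions needed | features)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.01, epochs=2000):
    """Logistic regression fit by per-sample gradient descent, no libraries."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y  # gradient of log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Scale features to comparable ranges before training.
w, b = train([((lc / 100.0, ft / 10.0), y) for (lc, ft), y in TRAIN])

# Small, focused change: the model scores it as unlikely to need revisions.
print(predict(w, b, (10 / 100.0, 1 / 10.0)) < 0.5)
# Large, sprawling change: the model scores it as likely to need revisions.
print(predict(w, b, (300 / 100.0, 8 / 10.0)) > 0.5)
```

In practice a predictor like this would draw features from the review history mined from tools such as Gerrit (author experience, patch size, touched modules), but the two-feature setup above is enough to show the prediction interface.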
Keywords:

Subjects: Engineering, Electrical and Electronic; Computer Science, Software Engineering; Computer Science, Cybernetics; Computer Science, Information Systems; Computer Science, Hardware and Architecture; Computer Science, Theory and Methods; Computer Science, Artificial Intelligence
Document Type: Article Article Type: Research Article Access Type: Open Access
APA: GEREDE Ç, MAZAN Z (2018). Will it pass? Predicting the outcome of a source code review. Turkish Journal of Electrical Engineering and Computer Sciences, 26(3), 1343-1353. 10.3906/elk-1707-173
Chicago: GEREDE Çağdas Evren, MAZAN Zeki. "Will it pass? Predicting the outcome of a source code review." Turkish Journal of Electrical Engineering and Computer Sciences 26, no. 3 (2018): 1343-1353. 10.3906/elk-1707-173
MLA: GEREDE Çağdas Evren, MAZAN Zeki. "Will it pass? Predicting the outcome of a source code review." Turkish Journal of Electrical Engineering and Computer Sciences, vol. 26, no. 3, 2018, pp. 1343-1353. 10.3906/elk-1707-173
AMA: GEREDE Ç, MAZAN Z. Will it pass? Predicting the outcome of a source code review. Turkish Journal of Electrical Engineering and Computer Sciences. 2018; 26(3): 1343-1353. 10.3906/elk-1707-173
Vancouver: GEREDE Ç, MAZAN Z. Will it pass? Predicting the outcome of a source code review. Turkish Journal of Electrical Engineering and Computer Sciences. 2018; 26(3): 1343-1353. 10.3906/elk-1707-173
IEEE: GEREDE Ç, MAZAN Z, "Will it pass? Predicting the outcome of a source code review," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 26, pp. 1343-1353, 2018. 10.3906/elk-1707-173
ISNAD: GEREDE, Çağdas Evren - MAZAN, Zeki. "Will it pass? Predicting the outcome of a source code review". Turkish Journal of Electrical Engineering and Computer Sciences 26/3 (2018), 1343-1353. https://doi.org/10.3906/elk-1707-173