The Significance of Parameters’ Optimization in Fair Benchmarking of Software Defects’ Prediction Performances


Hussam Ghunaim
Julius Dichter

Abstract

Software engineering research in general, and software defect prediction research in particular, faces serious challenges to its reliability and validity, largely because many published research outcomes contradict one another. This phenomenon is mainly caused by the absence of the kind of research standards that exist in many well-established scientific and engineering disciplines. The scope of this paper is fair benchmarking of defect prediction models. By experimenting with three prediction algorithms, we found that the quality of the resulting predictions fluctuates significantly as parameter values change. Consequently, published results that are not based on prediction algorithms with optimized parameters can lead to inaccurate and misleading benchmarks and recommendations. We therefore propose parameter optimization as an essential research standard for conducting reliable and valid benchmarking. We believe that if this standard were adopted by interested software quality practitioners and research communities, it would play a vital role in mitigating the severity of this phenomenon. The three prediction algorithms used in our analysis were Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Naïve Bayes (NB). We used KNIME as the data mining platform to design and run all optimization loops on the open-source Eclipse 2.0 data set.

 

Keywords: parameter optimization, defect prediction, data mining, benchmarking, performance quality, SVM, MLP, NB, KNIME, Eclipse, machine learning
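
To illustrate the kind of parameter optimization the abstract argues for, the following is a minimal sketch, not the paper's KNIME workflow: it uses scikit-learn as a stand-in platform, a synthetic data set in place of the Eclipse 2.0 defect data, and illustrative parameter grids rather than the authors' settings. It shows how the same three classifiers (SVM, MLP, NB) can yield noticeably different scores across parameter settings, which is why benchmarking un-tuned models can mislead.

```python
# Hedged sketch: scikit-learn stands in for KNIME, and synthetic data stands in
# for the Eclipse 2.0 defect metrics. Grids below are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

# Placeholder data: features play the role of code metrics, the label marks
# defective vs. non-defective modules (imbalanced, as defect data usually is).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# One (model, parameter grid) pair per algorithm compared in the paper.
search_spaces = {
    "SVM": (make_pipeline(StandardScaler(), SVC()),
            {"svc__C": [0.1, 1, 10, 100],
             "svc__gamma": ["scale", 0.01, 0.1]}),
    "MLP": (make_pipeline(StandardScaler(),
                          MLPClassifier(max_iter=2000, random_state=0)),
            {"mlpclassifier__hidden_layer_sizes": [(10,), (50,), (50, 25)],
             "mlpclassifier__alpha": [1e-4, 1e-2]}),
    "NB": (GaussianNB(),
           {"var_smoothing": [1e-9, 1e-7, 1e-5]}),
}

for name, (model, grid) in search_spaces.items():
    search = GridSearchCV(model, grid, scoring="f1", cv=5)
    search.fit(X_train, y_train)
    scores = search.cv_results_["mean_test_score"]
    # The spread between the worst and best settings is the "fluctuation" the
    # abstract describes: reporting only one arbitrary setting hides it.
    print(f"{name}: cross-validated F1 ranges from {scores.min():.3f} "
          f"to {scores.max():.3f}; best params = {search.best_params_}")
```

In such a loop, each algorithm is benchmarked at its best cross-validated setting rather than at an arbitrary default, which is the fairness condition the paper proposes as a research standard.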

