eprintid: 11977
rev_number: 2
eprint_status: archive
userid: 1
dir: disk0/00/01/19/77
datestamp: 2023-11-10 03:26:31
lastmod: 2023-11-10 03:26:31
status_changed: 2023-11-10 01:16:35
type: article
metadata_visibility: show
creators_name: Balogun, A.O.
creators_name: Basri, S.
creators_name: Abdulkadir, S.J.
creators_name: Adeyemo, V.E.
creators_name: Imam, A.A.
creators_name: Bajeh, A.O.
title: Software defect prediction: Analysis of class imbalance and performance stability
ispublished: pub
note: cited By 29
abstract: The performance of prediction models in software defect prediction (SDP) depends on the quality of the datasets used to train them. Class imbalance is one of the data quality problems that affect prediction models. It has drawn the attention of researchers, and many approaches have been developed to address it. This study presents an extensive empirical evaluation of the performance stability of prediction models in SDP. Ten software defect datasets from the NASA and PROMISE repositories, with varying imbalance ratio (IR) values, were used as the original datasets. New datasets were generated from the original datasets using an undersampling method (Random Under-Sampling: RUS) and an oversampling method (Synthetic Minority Oversampling Technique: SMOTE) with different IR values. The sampling techniques were based on equal-proportion (100) increments (SMOTE) of the minority class label or decrements (RUS) of the majority class label until each dataset was balanced. IR is the ratio of defective instances to non-defective instances in a dataset. Each newly generated dataset, with its particular IR value and sampling technique, was randomized before the prediction models were applied. Nine standard prediction models were used on the newly generated datasets. Performance was measured using the Area Under the Curve (AUC), and the Coefficient of Variation (CV) was used to determine performance stability. Firstly, the experimental results showed that class imbalance had a negative effect on the performance of the prediction models, and the oversampling method (SMOTE) enhanced their performance. Secondly, oversampling is a better way to balance datasets than undersampling, as the latter performed poorly owing to the random deletion of useful instances from the datasets. Finally, among the prediction models used in this study, Logistic Regression (LR) (CV with RUS: 30.05; SMOTE: 33.51), Naïve Bayes (NB) (RUS: 34.18; SMOTE: 33.05), and Random Forest (RF) (RUS: 29.24; SMOTE: 64.25) were the more stable prediction models and worked well with imbalanced datasets. © School of Engineering, Taylor's University.
date: 2019
publisher: Taylor's University
official_url: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077395445&partnerID=40&md5=144bb5c249bac26f803e1d85105d23dc
full_text_status: none
publication: Journal of Engineering Science and Technology
volume: 14
number: 6
pagerange: 3294-3308
refereed: TRUE
issn: 18234690
citation: Balogun, A.O. and Basri, S. and Abdulkadir, S.J. and Adeyemo, V.E. and Imam, A.A. and Bajeh, A.O. (2019) Software defect prediction: Analysis of class imbalance and performance stability. Journal of Engineering Science and Technology, 14 (6). pp. 3294-3308. ISSN 18234690
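
The abstract above describes a resampling-and-stability pipeline: rebalance each dataset with RUS or SMOTE at stepped IR values, apply several classifiers, score them with AUC, and summarise stability with the CV of those scores. A minimal sketch of that kind of pipeline is given below, assuming scikit-learn and imbalanced-learn; the synthetic dataset, the IR grid, and the three example models (LR, NB, RF) are illustrative stand-ins, not the authors' actual datasets or full set of nine models.

# Minimal sketch (not the paper's code): rebalance a dataset with RUS or
# SMOTE at several target imbalance ratios, score models with AUC, and
# summarise stability with the coefficient of variation of the AUC scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic stand-in for a defect dataset (minority class = defective).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Target minority/majority ratios, stepped toward a balanced dataset (1.0).
ir_targets = [0.2, 0.4, 0.6, 0.8, 1.0]
samplers = {"SMOTE": SMOTE, "RUS": RandomUnderSampler}

for sampler_name, sampler_cls in samplers.items():
    for model_name, model in models.items():
        aucs = []
        for ir in ir_targets:
            sampler = sampler_cls(sampling_strategy=ir, random_state=42)
            X_res, y_res = sampler.fit_resample(X, y)
            # Randomize the resampled dataset before applying the model,
            # as the abstract describes.
            X_res, y_res = shuffle(X_res, y_res, random_state=42)
            auc = cross_val_score(model, X_res, y_res,
                                  cv=5, scoring="roc_auc").mean()
            aucs.append(auc)
        # CV of AUC across IR levels: lower values mean a more stable model.
        cv = 100.0 * np.std(aucs) / np.mean(aucs)
        print(f"{sampler_name} {model_name}: mean AUC={np.mean(aucs):.3f}, CV={cv:.2f}%")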