Concepedia

TLDR

Defect prediction models help software quality assurance teams allocate limited resources to the most defect‑prone modules, yet little is known about the accuracy of model validation estimates. This study investigates the bias and variance of model validation techniques in defect prediction. The authors evaluate 12 commonly used validation techniques, selected from 256 studies, and assess their performance using k‑fold cross‑validation and other historical‑data methods. Analysis of 101 public datasets shows that 77 % are highly susceptible to unstable results, and a case study of 18 systems reveals that single‑repetition holdout validation produces 46–229 % more bias and 53–863 % more variance than top techniques, while out‑of‑sample bootstrap offers the best bias‑variance balance, leading the authors to recommend avoiding holdout and adopting bootstrap.

Abstract

Defect prediction models help software quality assurance teams to allocate their limited resources to the most defect-prone modules. Model validation techniques, such as k-fold cross-validation, use historical data to estimate how well a model will perform in the future. However, little is known about how accurate the estimates of model validation techniques tend to be. In this paper, we investigate the bias and variance of model validation techniques in the domain of defect prediction. Analysis of 101 public defect datasets suggests that 77 percent of them are highly susceptible to producing unstable results; selecting an appropriate model validation technique is a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of 18 systems, we find that single-repetition holdout validation tends to produce estimates with 46-229 percent more bias and 53-863 percent more variance than the top-ranked model validation techniques. On the other hand, out-of-sample bootstrap validation yields the best balance between the bias and variance of estimates in the context of our study. Therefore, we recommend that future defect prediction studies avoid single-repetition holdout validation, and instead, use out-of-sample bootstrap validation.
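The out-of-sample bootstrap the abstract recommends trains a model on a sample drawn with replacement and evaluates it on the rows left out of that sample (the "out-of-bag" rows), repeating many times and averaging. A minimal sketch of that resampling scheme, using a hypothetical threshold learner as a stand-in for a real defect prediction model (the dataset, learner, and parameter names are illustrative, not from the paper):

```python
import random

def out_of_sample_bootstrap(data, train_and_score, n_boot=100, seed=0):
    """Out-of-sample bootstrap validation: draw a bootstrap sample of size
    len(data) with replacement, train on it, score on the out-of-bag rows,
    repeat n_boot times, and average the performance estimates."""
    rng = random.Random(seed)
    n = len(data)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        in_bag = set(idx)
        train = [data[i] for i in idx]
        test = [data[i] for i in range(n) if i not in in_bag]  # out-of-bag rows
        if not test:  # rare: every row was drawn into the bootstrap sample
            continue
        scores.append(train_and_score(train, test))
    return sum(scores) / len(scores)

def train_and_score(train, test):
    """Hypothetical learner: flag a module as defective when its metric
    exceeds the training set's mean metric; returns accuracy on test."""
    threshold = sum(m for m, _ in train) / len(train)
    correct = sum((m > threshold) == label for m, label in test)
    return correct / len(test)

# Toy dataset of (module_metric, is_defective) pairs.
data = [(x, x > 5) for x in range(10)]
estimate = out_of_sample_bootstrap(data, train_and_score)
print(round(estimate, 2))
```

Unlike single-repetition holdout, each row is evaluated many times across different bootstrap iterations, which is why the averaged estimate tends to have lower variance.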
