Publication | Closed Access
Regression models for prognostic prediction: advantages, problems, and suggested solutions.
Year: 1985 · Citations: 566
Keywords: Engineering, Prognosis, Diagnosis, Regression Models, Potential Predictors, Biostatistics, Statistics, Medical Statistics, Prediction Modelling, Multiple Regression Models, Health Policy, Disease Risk Assessment, Predictive Analytics, Predictive Modeling, Risk, Outcomes Research, Clinical Decision Support, Forecasting, Epidemiology, Prognostic Evaluation, Patient Safety, Medicine, Prognostics
Regression models are widely used for patient outcome prediction, yet many studies apply them without validating assumptions or guarding against overfitting, resulting in models that often fail external validation. The study aims to demonstrate that data‑reduction techniques can enhance regression model performance when the endpoint‑to‑predictor ratio is low. The authors employ data‑reduction methods applicable when the number of events per predictor is below ten to mitigate overfitting. When assumptions are checked and violated assumptions are addressed, regression models outperform stratification and recursive partitioning and avoid overfitting.
Multiple regression models have wide applicability in predicting the outcome of patients with a variety of diseases. However, many researchers use such models without validating the necessary assumptions. All too frequently, researchers also "overfit" the data by developing models with too many predictor variables and insufficient sample sizes. Models developed in this way are unlikely to stand the test of validation on a separate patient sample. Without attempting such a validation, the researcher remains unaware that overfitting has occurred. When the ratio of the number of patients suffering endpoints to the number of potential predictors is small (say, less than 10), data-reduction methods are available that can greatly improve the performance of regression models. Regression models can make more accurate predictions than other methods such as stratification and recursive partitioning when model assumptions are thoroughly examined; steps are taken (i.e., choosing another model or transforming the data) when assumptions are violated; and the method of model formulation does not result in overfitting the data.
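The events-per-predictor rule of thumb and the idea of outcome-blind data reduction can be illustrated with a short sketch. This is a minimal illustration, not the authors' specific procedure: it simulates a cohort, computes the endpoint-to-predictor ratio the abstract warns about, and then reduces the candidate predictors to a few principal components (computed without looking at the outcome) so that the model is fit on far fewer degrees of freedom. All dataset sizes and the choice of principal components as the reduction method are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: 120 patients, 25 candidate predictors,
# a binary endpoint occurring in roughly a quarter of patients.
n, p = 120, 25
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.25).astype(int)

# Events per candidate predictor -- the abstract flags overfitting
# risk when this ratio falls below about 10.
events = int(y.sum())
epv = events / p
print(f"events per candidate predictor: {epv:.2f}")

# One data-reduction option (an illustration, not the paper's method):
# summarize the predictors with a few principal components computed
# WITHOUT using the outcome y, then fit the regression on those.
Xc = X - X.mean(axis=0)                  # center columns
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                    # reduced dimension, chosen so events/k is larger
scores = Xc @ Vt[:k].T                   # n x k reduced predictor matrix

print(f"events per reduced predictor: {events / k:.2f}")
```

Because the components are derived from the predictors alone, the reduction does not "peek" at the endpoint, so it does not itself contribute to overfitting; the subsequent regression on `scores` uses only `k` parameters instead of `p`.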