Publication | Closed Access
A Cautionary Note on the Robustness of Latent Class Models for Estimating Diagnostic Error without a Gold Standard
Citations: 205
References: 13
Year: 2004
Keywords: Engineering, Diagnosis, Diagnostic Error, Gold Standard, Latent Modeling, Conditional Dependence, Robust Statistic, Estimating Diagnostic Error, Model Misspecification, Biostatistics, Latent Class Models, Disease Diagnosis, Statistics, Medical Statistic, Diagnostic Criterion, Latent Variable Model, Marginal Structural Models, Epidemiology, Model Reliability, Time-varying Confounding, Statistical Inference, Medicine
Modeling diagnostic error without a gold standard relies on latent class models that traditionally assume conditional independence of test results, but recent work has explored dependence structures. This note highlights a problem with modeling dependence in latent class analyses and offers practical guidelines for practitioners. We demonstrate that misspecifying conditional dependence biases sensitivity, specificity, and prevalence estimates, and that with few tests likelihood comparisons may fail to detect such misspecification, as shown by asymptotic theory, simulations, and data analysis.
Modeling diagnostic error without a gold standard has been an active area of biostatistical research. In a majority of the approaches, model-based estimates of sensitivity, specificity, and prevalence are derived from a latent class model in which the latent variable represents an individual's true unobserved disease status. For simplicity, initial approaches assumed that the diagnostic test results on the same subject were independent given the true disease status (i.e., the conditional independence assumption). More recently, various authors have proposed approaches for modeling the dependence structure between test results given true disease status. This note discusses a potential problem with these approaches. Namely, we show that when the conditional dependence between tests is misspecified, estimators of sensitivity, specificity, and prevalence can be biased. Importantly, we demonstrate that with small numbers of tests, likelihood comparisons and other model diagnostics may not be able to distinguish between models with different dependence structures. We present asymptotic results that show the generality of the problem. Further, data analysis and simulations demonstrate the practical implications of model misspecification. Finally, we present some guidelines about the use of these models for practitioners.
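The conditional-independence latent class model described in the abstract can be made concrete with a small simulation. The sketch below (not the paper's own code; all parameter values are hypothetical) simulates three binary tests that are independent given true disease status and fits the model by EM, recovering sensitivity, specificity, and prevalence without ever observing the latent status:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate K = 3 conditionally independent binary tests (hypothetical values:
# prevalence 0.30, sensitivity 0.85, specificity 0.95 for every test).
N, K = 20000, 3
prev_true, sens_true, spec_true = 0.30, 0.85, 0.95
d = rng.random(N) < prev_true                       # true latent disease status
y = np.where(d[:, None],
             rng.random((N, K)) < sens_true,        # diseased: positive w.p. sens
             rng.random((N, K)) < 1 - spec_true     # healthy: positive w.p. 1 - spec
             ).astype(float)

# EM for the conditional-independence latent class model.
p, s, c = 0.5, np.full(K, 0.8), np.full(K, 0.8)     # initial guesses
for _ in range(500):
    # E-step: posterior probability of disease given the K test results
    lik1 = p * np.prod(s**y * (1 - s)**(1 - y), axis=1)
    lik0 = (1 - p) * np.prod((1 - c)**y * c**(1 - y), axis=1)
    w = lik1 / (lik1 + lik0)
    # M-step: weighted updates of prevalence, sensitivities, specificities
    p = w.mean()
    s = (w[:, None] * y).sum(0) / w.sum()
    c = ((1 - w)[:, None] * (1 - y)).sum(0) / (1 - w).sum()

print(round(p, 3), np.round(s, 3), np.round(c, 3))
```

With three tests the model is just identified (seven free cell probabilities, seven parameters), which is exactly the regime the note warns about: an added dependence parameter can fit the observed table equally well while shifting these estimates, and likelihood comparisons cannot adjudicate.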
| Year | Citations |
|---|---|
| 1980 | 774 |
| 2001 | 515 |
| 1996 | 440 |
| 1985 | 308 |
| 2001 | 291 |
| 1999 | 230 |
| 1997 | 158 |
| 1997 | 147 |
| 1989 | 129 |
| 2002 | 96 |