Publication | Closed Access
Objective Testing Procedures in Linear Models: Calibration of the p-values
Citations: 47 | References: 27 | Year: 2006
Topics: Bayesian Statistics, Bayesian Decision Theory, Objective Testing Procedures, Generalizability Theory, Education, Normal Linear Models, Classical Test Theory, Bayesian Inference, Stochastic Simulation, Calibration, Bayesian Modeling, Biostatistics, Bayesian Methods, Public Health, Statistics, Bayesian Hierarchical Modeling, Test Development, Regression Testing, Measurement Models, Posterior Probabilities, Statistical Evidence, Hypothesis Testing, Statistical Inference
Abstract. An optimal Bayesian decision procedure for hypothesis testing in normal linear models, based on intrinsic model posterior probabilities, is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic, so the procedure can be evaluated analytically through the frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any testing problem it is also shown that there is a one-to-one mapping, which we call the calibration curve, between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and the p-value measures of evidence in hypothesis testing. It permits a better understanding of the serious difficulties encountered in interpreting p-values in linear models. A specific illustration of the variable selection problem is given.
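The abstract's central idea, a calibration curve mapping a p-value to a posterior probability of the null, can be illustrated numerically. The sketch below does not reproduce the paper's intrinsic-prior calibration for linear models; instead it uses the well-known generic lower-bound calibration of Sellke, Bayarri and Berger (the -e p log p bound on the Bayes factor), purely to show the kind of discrepancy between p-values and posterior probabilities the paper studies:

```python
import math

def sbb_lower_bound_posterior(p, prior_null=0.5):
    """Lower bound on P(H0 | data) implied by a p-value via the
    Sellke-Bayarri-Berger bound B_01 >= -e * p * log(p).

    Illustrative only: this is a generic calibration, NOT the
    intrinsic-prior calibration curve derived in the paper.
    The bound is valid for 0 < p < 1/e.
    """
    if not (0.0 < p < 1.0 / math.e):
        raise ValueError("bound applies only for 0 < p < 1/e")
    bf_bound = -math.e * p * math.log(p)          # lower bound on B_01
    prior_odds = prior_null / (1.0 - prior_null)  # odds in favour of H0
    post_odds = prior_odds * bf_bound             # posterior odds for H0
    return post_odds / (1.0 + post_odds)          # posterior probability

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:6.3f}  ->  P(H0 | data) >= {sbb_lower_bound_posterior(p):.3f}")
```

Even this crude bound shows the headline discrepancy: a p-value of 0.05 is compatible with a posterior probability of the null of about 0.29, which is one way to read the "serious difficulties" in interpreting p-values mentioned above.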