Concepedia

TLDR

The model selection literature has inadequately reflected the deep foundations of AIC and its proper comparison to BIC, despite AIC's clear information‑theoretic philosophy and rigorous statistical basis. The study argues that the choice between AIC and BIC should be guided by philosophical assumptions about reality, approximating models, and the intent of the inference, rather than framed as a Bayes versus frequentist debate: AIC can be justified as a Bayesian criterion under a prior that depends on sample size and the number of model parameters, while BIC can be derived without Bayesian arguments. The paper also presents multimodel inference techniques, especially model‑averaging methods.
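The "sample‑size‑dependent prior" claim can be sketched with a short derivation (not spelled out in this summary; this is the standard argument, with $k_i$ the parameter count of model $g_i$, $\mathcal{L}_i$ its maximized likelihood, and $n$ the sample size):

```latex
\[
\mathrm{AIC}_i = -2\log\mathcal{L}_i + 2k_i,
\qquad
\mathrm{BIC}_i = -2\log\mathcal{L}_i + k_i\log n .
\]
% Bayesian posterior model probabilities are proportional to
% exp(-BIC_i/2) times the model prior q_i. Choosing the
% sample-size-dependent ("savvy") prior
\[
q_i \propto \exp\!\Big(\tfrac{1}{2}k_i\log n - k_i\Big)
\]
% makes the two penalties cancel exactly:
\[
p(g_i \mid \text{data})
\;\propto\; e^{-\frac{1}{2}\mathrm{BIC}_i}\, q_i
\;=\; e^{\log\mathcal{L}_i - k_i}
\;=\; e^{-\frac{1}{2}\mathrm{AIC}_i},
\]
% i.e. the posterior model probabilities reduce to the Akaike weights,
% which is the sense in which AIC is "Bayesian with a savvy prior."
\]
```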

Abstract

The model selection literature has been generally poor at reflecting the deep foundations of the Akaike information criterion (AIC) and at making appropriate comparisons to the Bayesian information criterion (BIC). There is a clear philosophy, a sound criterion based in information theory, and a rigorous statistical foundation for AIC. AIC can be justified as Bayesian using a “savvy” prior on models that is a function of sample size and the number of model parameters. Furthermore, BIC can be derived as a non-Bayesian result. Therefore, arguments about using AIC versus BIC for model selection cannot be from a Bayes versus frequentist perspective. The philosophical context of what is assumed about reality, approximating models, and the intent of model-based inference should determine whether AIC or BIC is used. Various facets of such multimodel inference are presented here, particularly methods of model averaging.
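To make the criteria and the model-averaging idea concrete, here is a minimal sketch (not from the paper; a toy polynomial-regression example using only NumPy) that computes AIC, BIC, Akaike weights, and a model-averaged prediction:

```python
import numpy as np

def gaussian_loglik(y, yhat):
    # Concentrated Gaussian log-likelihood with the ML estimate of sigma^2
    n = len(y)
    sigma2 = np.mean((y - yhat) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic(loglik, k):
    # AIC = 2k - 2 log L
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # BIC = k log n - 2 log L
    return k * np.log(n) - 2 * loglik

def akaike_weights(aics):
    # w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    # where Delta_i = AIC_i - min_j AIC_j
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Toy data: quadratic truth plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)

# Candidate models: polynomials of degree 1..4
preds, aics = [], []
for deg in range(1, 5):
    coeffs = np.polyfit(x, y, deg)
    yhat = np.polyval(coeffs, x)
    ll = gaussian_loglik(y, yhat)
    k = deg + 2  # polynomial coefficients plus the variance parameter
    preds.append(yhat)
    aics.append(aic(ll, k))

w = akaike_weights(aics)
# Model averaging: weight each model's prediction by its Akaike weight
y_avg = sum(wi * p for wi, p in zip(w, preds))
```

The weights sum to one and act as model probabilities; the badly underfitting linear model receives essentially zero weight, so the averaged prediction is dominated by the well-supported models.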

