Publication | Closed Access
There’s no such thing as a ‘true’ model: the challenge of assessing face validity
Citations: 23 · References: 23 · Year: 2019 · Venue: unknown
Keywords: Bayesian statistics, Bayesian decision theory, engineering, social psychology, Bayesian econometrics, face validation, psychometrics, causal inference, Bayesian inference, psychology, biomedical signal analysis, complexity, social sciences, data science, Bayesian methods, psychological evaluation, statistics, affect perception, Bayesian hierarchical modeling, behavioral sciences, applied social psychology, validity theory, model comparison, experimental psychology, social cognition, personality psychology, robust modeling, statistical inference, face validity, emotion, complexity penalty
To select among competing generative models of time-series data, it is necessary to balance goodness of fit (accuracy) against model complexity. Bayesian methods achieve this balance in a mathematically principled way. However, when performing simulations to assess the identifiability of models (face validation), the best model identified by Bayesian model comparison can appear more complex than the model that actually generated the data. We illustrate this using dynamic causal models of human electrophysiological data, where models with multiple parameter modulations are selected as the best model even when the true modulations are sparse. We explain this by the form of the complexity penalty, which is equivalent to a weighted L2 norm. This phenomenon is an example of the implicit prior biases that a complexity penalty necessarily entails.
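The abstract's point can be made concrete in a simpler setting than dynamic causal modelling. The following sketch (an illustrative toy, not the paper's DCM code) computes the variational free energy for Bayesian linear regression with a Gaussian prior, splitting it into an accuracy term and a complexity term. The complexity term is the KL divergence from posterior to prior, whose quadratic part, `diff @ Pi @ diff`, is exactly a prior-precision-weighted L2 norm of how far the posterior mean has moved from the prior mean. All function names and parameter choices here are assumptions for illustration.

```python
# Sketch: free energy = accuracy - complexity for Bayesian linear
# regression; the complexity penalty is a weighted L2 norm of the
# posterior shift away from the prior mean.
import numpy as np

def gaussian_kl(mu_q, Sigma_q, mu_p, Sigma_p):
    """KL(q || p) between multivariate Gaussians. The quadratic term
    diff' Pi diff is the prior-precision-weighted L2 penalty."""
    d = len(mu_q)
    Pi = np.linalg.inv(Sigma_p)          # prior precision
    diff = mu_q - mu_p                   # posterior shift from prior
    _, logdet_p = np.linalg.slogdet(Sigma_p)
    _, logdet_q = np.linalg.slogdet(Sigma_q)
    return 0.5 * (np.trace(Pi @ Sigma_q) + diff @ Pi @ diff
                  - d + logdet_p - logdet_q)

def free_energy(X, y, sigma2, prior_var):
    """Free energy (accuracy minus complexity) for y = X @ theta + noise,
    theta ~ N(0, prior_var * I), noise ~ N(0, sigma2 * I). The posterior
    is exactly Gaussian here, so F equals the log model evidence."""
    n, d = X.shape
    Sigma_post = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / prior_var)
    mu_post = Sigma_post @ X.T @ y / sigma2
    resid = y - X @ mu_post
    # Expected log likelihood under the posterior q(theta)
    accuracy = (-0.5 * n * np.log(2 * np.pi * sigma2)
                - 0.5 * (resid @ resid
                         + np.trace(X @ Sigma_post @ X.T)) / sigma2)
    complexity = gaussian_kl(mu_post, Sigma_post,
                             np.zeros(d), prior_var * np.eye(d))
    return accuracy - complexity, complexity
```

Because the penalty is quadratic rather than an L0/L1 sparsity penalty, many small posterior deviations from the prior can cost less than a few large ones. This is one way a model with several modest parameter modulations can out-score the sparse model that actually generated the data, which is the behaviour the abstract describes for DCM.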