Concepedia

Publication | Open Access

Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks

Citations: 20 | References: 0 | Year: 2020

Abstract

Accurate estimation of predictive uncertainty in modern neural networks is critical to achieve well-calibrated predictions and detect out-of-distribution (OOD) inputs. The most promising approaches have been predominantly focused on improving model uncertainty (e.g. deep ensembles and Bayesian neural networks) and post-processing techniques for OOD detection (e.g. ODIN and Mahalanobis distance). However, there has been relatively little investigation into how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates, and the dominant method, softmax cross-entropy, results in misleadingly high confidences on OOD data and under covariate shift. We investigate alternative ways of formulating probabilities using (1) a one-vs-all formulation to capture the notion of "none of the above", and (2) a distance-based logit representation to encode uncertainty as a function of distance to the training manifold. We show that one-vs-all formulations can improve calibration on image classification tasks, while matching the predictive performance of softmax without incurring any additional training or test-time complexity.
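The two parametrizations in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the weights, centroids, and input below are random placeholders, and the distance-based logits here use a plain squared Euclidean distance as one simple instance of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 classes, 4-dimensional features (all values are illustrative).
W = rng.normal(size=(3, 4))   # per-class weight vectors
b = rng.normal(size=3)        # per-class biases
x = rng.normal(size=4)        # a single input

logits = W @ x + b

# Softmax parametrization: probabilities are forced to sum to 1,
# so even a far-from-training input is assigned high confidence
# on whichever class has the largest logit.
softmax_probs = np.exp(logits - logits.max())
softmax_probs /= softmax_probs.sum()

# One-vs-all parametrization: each class gets an independent sigmoid,
# so every class can simultaneously have low probability,
# expressing "none of the above".
ova_probs = 1.0 / (1.0 + np.exp(-logits))

# Distance-based logits: logit_k = -||x - mu_k||^2, so confidence
# decays with distance from the (hypothetical) class centroids mu_k.
mu = rng.normal(size=(3, 4))
dist_logits = -np.sum((x - mu) ** 2, axis=1)

print("softmax sums to:", softmax_probs.sum())
print("one-vs-all probs:", ova_probs)
print("distance logits:", dist_logits)
```

Note that `ova_probs` need not sum to 1: its values shrink toward 0 jointly as the input moves away from every decision region, which is the mechanism the abstract appeals to for OOD detection.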