Publication | Closed Access
Unbiased look at dataset bias
Citations: 2.4K · References: 19 · Year: 2011 · Venue: unknown
Keywords: Artificial Intelligence, Relative Data Bias, Engineering, Machine Learning, Object Categorization, Image Analysis, Data Science, Pattern Recognition, Bias, Statistics, Vision Recognition, Machine Vision, Benchmark Datasets, Algorithmic Bias, Object Detection, Computer Science, Deep Learning, Bias Detection, Latest Benchmark Numbers, Computer Vision, Dataset Bias, Object Recognition, Object Recognition Research, Statistical Inference
Datasets have driven progress in object recognition but have also narrowed research focus to benchmark performance, creating closed‑world communities and potentially diverting attention from the field’s original goals. The paper aims to assess the current state of recognition datasets. The authors conduct a comparative study of popular datasets, evaluating them on data bias, cross‑dataset generalization, closed‑world effects, and sample value. Results reveal surprising insights that point to improvements in dataset collection and evaluation protocols, and the authors hope to spark broader community discussion on this neglected issue.
Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as a source of large amounts of training data, but also as a means of measuring and comparing the performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets that started out as data capture efforts aimed at representing the visual world have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of the closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. More broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected, issue.
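The cross-dataset generalization test the abstract describes — train a recognizer on one dataset, then measure how much its accuracy drops on the others — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy 2-D Gaussian "datasets", the per-dataset feature offset standing in for capture bias, and the nearest-centroid classifier are not the paper's actual data or models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(shift, n=200):
    # Hypothetical toy "dataset": two object classes whose 2-D features
    # carry a dataset-specific offset, standing in for capture bias.
    neg = rng.normal(0.0 + shift, 1.0, (n, 2))
    pos = rng.normal(3.0 + shift, 1.0, (n, 2))
    return np.vstack([neg, pos]), np.array([0] * n + [1] * n)

def train_nearest_centroid(X, y):
    # Minimal classifier: remember the mean feature vector of each class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Predict the class of the nearest centroid; return mean accuracy.
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((dists.argmin(axis=1) == y).mean())

# Three simulated datasets with increasing bias (feature offset).
datasets = {"A": make_dataset(0.0), "B": make_dataset(1.0), "C": make_dataset(2.0)}

names = list(datasets)
M = np.zeros((3, 3))  # M[i, j] = accuracy when trained on i, tested on j
for i, a in enumerate(names):
    centroids = train_nearest_centroid(*datasets[a])
    for j, b in enumerate(names):
        M[i, j] = accuracy(centroids, *datasets[b])
```

On this toy data the diagonal of `M` (train and test on the same dataset) comes out clearly higher than the off-diagonal entries — the same signature of dataset bias that the paper measures on real recognition datasets.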