Concepedia

Publication | Open Access

CIFAR10 to Compare Visual Recognition Performance between Deep Neural Networks and Humans

Citations: 25
References: 0
Year: 2018

Abstract

Visual object recognition plays an essential role in human daily life. This ability is so efficient that we can recognize a face or an object seemingly without effort, even though it may vary in position, scale, pose, and illumination. In the field of computer vision, many studies have sought to build a human-like object recognition system. Recently, deep neural networks have shown impressive progress in object classification performance and have been reported to surpass humans. Yet there is still a lack of thorough and fair comparison between humans and artificial recognition systems. While some studies consider artificially degraded images, human recognition performance on the datasets widely used for deep neural networks has not been fully evaluated. The present paper carries out an extensive experiment to evaluate human classification accuracy on CIFAR10, a well-known dataset of natural images, which then allows for a fair comparison with state-of-the-art deep neural networks. Our CIFAR10-based evaluations show very efficient object recognition by recent CNNs but, at the same time, demonstrate that they are still far from human-level generalization. Moreover, a detailed investigation across multiple difficulty levels reveals that images that are easy for humans may not be easy for deep neural networks. Such images form a subset of CIFAR10 that can be employed to evaluate and improve future neural networks.
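The comparison described in the abstract — measuring human and network accuracy on the same images and extracting the subset that humans classify correctly but the network does not — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual evaluation code; the label and prediction arrays below are synthetic stand-ins, not CIFAR10 data:

```python
import numpy as np


def accuracy(preds, labels):
    """Fraction of images whose predicted class matches the ground truth."""
    return float(np.mean(preds == labels))


def easy_for_humans_hard_for_model(human_preds, model_preds, labels):
    """Indices of images that humans classify correctly but the model does not."""
    human_ok = human_preds == labels
    model_ok = model_preds == labels
    return np.flatnonzero(human_ok & ~model_ok)


# Synthetic stand-ins for 100 test images with 10 classes (CIFAR10-like).
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=100)

human_preds = labels.copy()
human_preds[:6] = (human_preds[:6] + 1) % 10     # humans miss images 0-5
model_preds = labels.copy()
model_preds[3:13] = (model_preds[3:13] + 1) % 10  # model misses images 3-12

print(accuracy(human_preds, labels))  # 0.94
print(accuracy(model_preds, labels))  # 0.9
print(easy_for_humans_hard_for_human := easy_for_humans_hard_for_model(
    human_preds, model_preds, labels))  # [ 6  7  8  9 10 11 12]
```

The last array is the kind of subset the abstract proposes: images the model fails on despite being unambiguous to humans, which could then be collected into a harder evaluation set.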