Publication | Open Access
Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics
Citations: 1.3K · References: 67 · Year: 2013
Topics: Engineering · Evaluation Metrics · Image Description · Communication · Image Search · Ranking Task · Corpus Linguistics · Natural Language Processing · Multimodal LLM · Image Analysis · Information Retrieval · Text-to-Image Retrieval · Visual Grounding · Computational Linguistics · Visual Question Answering · Language Studies · Content Analysis · Machine Translation · Automatic Evaluation Metrics · Vision Language Model · Deep Learning · Computer Vision · Image Understanding · Scene Interpretation · Linguistics
Associating images with natural language descriptions is a key aspect of image understanding and underpins sentence‑based image search. The authors propose to treat sentence‑based image annotation as a ranking problem over a pool of captions. They introduce an 8,000‑image benchmark with five captions each, and conduct a detailed comparison of human and automatic evaluation metrics, proposing scalable methods for collecting large‑scale human judgments. Their experiments show that minimally supervised systems perform well, that training on multiple captions and exploiting syntactic and semantic features improves results, that ranking‑aware metrics are more robust, and that the evaluation of ranking‑based image description can be fully automated.
The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.
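The abstract notes that metrics considering the ranked list of results per query are more robust than single-response metrics. As an illustrative sketch (not the authors' code), the two list-based metrics commonly used for this kind of caption-retrieval evaluation, Recall@k and median rank, can be computed from the rank each query assigns to its correct caption:

```python
# Illustrative sketch: list-based retrieval metrics of the kind used for
# ranking-based image description benchmarks. The example ranks below are
# hypothetical, not results from the paper.

def recall_at_k(ranks, k):
    """Fraction of queries whose correct caption appears in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def median_rank(ranks):
    """Median position of the correct caption across all queries
    (lower is better; rank 1 means the true caption was ranked first)."""
    s = sorted(ranks)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

ranks = [1, 3, 2, 15, 1]        # hypothetical ranks for five query images
print(recall_at_k(ranks, 5))    # 0.8
print(median_rank(ranks))       # 2
```

Because these metrics aggregate over the full ranked pool rather than a single generated caption per image, they degrade gracefully when several captions in the pool are plausible descriptions of the same image.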