Publication | Closed Access
Zero-Shot Recognition via Semantic Embeddings and Knowledge Graphs
689 Citations | 49 References | Year: 2018 | Venue: Unknown
Keywords: Few-shot Learning, Graph Neural Network, Machine Vision, Machine Learning, Image Analysis, Data Science, Pattern Recognition, Categorization, Visual Category, Engineering, Zero-shot Learning, Feature Learning, Knowledge Graph Embeddings, Zero-shot Recognition, Computer Science, Deep Learning, Visual Classifier, Computer Vision
Zero‑shot recognition aims to learn a visual classifier for a category with no training images by leveraging its word embedding and its relationships to known categories. The authors propose a Graph Convolutional Network (GCN) based method that uses both semantic embeddings and categorical relationships to predict classifiers for unseen categories. They construct a knowledge graph, feed per-node semantic embeddings into a GCN, train on categories whose visual classifiers are known, and use the learned graph filters to infer classifiers for unseen categories. The method is robust to noisy knowledge graphs and improves on the current state of the art, with gains ranging from 2–3% on some metrics to roughly 20% on a few.
We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, using only the word embedding of the category and its relationships to other categories for which visual data are provided. The key to handling an unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (each representing a visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given in order to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2–3% on some metrics to a whopping 20% on a few).
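The pipeline described in the abstract (knowledge graph in, word embeddings per node, graph convolutions out to classifier weights) can be sketched in a few lines. This is a minimal NumPy illustration of the general GCN propagation rule, not the authors' implementation: the graph, dimensions, and weight initialization are all toy assumptions, and training (regressing predicted rows onto known classifiers for seen categories) is only indicated in comments.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_predict_classifiers(A_norm, X, W1, W2):
    """Two graph-convolution layers mapping word embeddings to
    classifier weights (one row per category)."""
    H = np.maximum(A_norm @ X @ W1, 0.0)  # ReLU here; the paper uses LeakyReLU
    return A_norm @ H @ W2

# Toy symmetric knowledge graph over 5 categories (hypothetical).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
X  = rng.normal(size=(5, 300))               # e.g. 300-d word embeddings per node
W1 = rng.normal(scale=0.1, size=(300, 128))  # layer-1 weights (untrained)
W2 = rng.normal(scale=0.1, size=(128, 512))  # layer-2 weights -> 512-d visual space

W_pred = gcn_predict_classifiers(normalize_adjacency(A), X, W1, W2)
# Training would regress the rows of W_pred for *seen* categories onto
# their known visual classifiers (e.g. an MSE loss). At test time, rows
# for unseen categories score images by dot product with CNN features:
score = rng.normal(size=512) @ W_pred[3]     # score one image feature vs. category 3
```

Because every category's output is a weighted mix of its graph neighborhood, information from categories with known classifiers propagates to unseen ones, which is what makes the zero-shot prediction possible.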