Publication | Closed Access
Large-Scale Long-Tailed Recognition in an Open World
Citations: 1.1K · References: 46 · Year: 2019 · Venue: unknown
Artificial Intelligence · Few-shot Learning · Multiple Instance Learning · Engineering · Machine Learning · Image Classification · Image Analysis · Zero-shot Learning · Data Science · Pattern Recognition · Long-tail Learning · Real World Data · Machine Vision · Feature Learning · Knowledge Discovery · Open World · Computer Science · Deep Learning · Integrated OLTR Algorithm · Computer Vision · Practical Recognition System · Object Recognition
Real‑world data exhibit long‑tailed, open‑ended distributions, requiring recognition systems to handle majority and minority classes, few‑shot learning, and novelty detection. The study introduces Open Long‑Tailed Recognition (OLTR), which aims to learn from naturally distributed data and maximize balanced‑test accuracy across head, tail, and open classes. The authors propose an integrated OLTR algorithm that maps images into a metric‑aware feature space using a dynamic meta‑embedding of direct and memory features, where the feature norm signals familiarity with known classes. On three large‑scale OLTR datasets curated from ImageNet, Places, and MS1M, the method consistently outperforms the state of the art.
Real-world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon encountering a never-seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set that includes head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and open-set recognition in one integrated algorithm, whereas existing classification approaches focus on only one aspect and perform poorly over the entire class spectrum. The key challenges are how to share visual knowledge between head and tail classes and how to reduce confusion between tail and open classes. We develop an integrated OLTR algorithm that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a direct image feature and an associated memory feature, with the feature norm indicating familiarity with known classes. On three large-scale OLTR datasets we curate from object-centric ImageNet, scene-centric Places, and face-centric MS1M data, our method consistently outperforms the state of the art. Our code, datasets, and models enable future OLTR research and are publicly available at https://liuziwei7.github.io/projects/LongTail.html.
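To make the dynamic meta-embedding idea concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: a direct image feature is combined with a memory feature retrieved by attention over per-class centroids, and the result is scaled by a reachability term so that inputs far from all known classes get a small feature norm. The module name `DynamicMetaEmbedding` and attributes `centroids` and `selector` are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a dynamic meta-embedding layer: direct feature + gated
# memory feature, scaled by reachability to known-class centroids.
# This is an illustrative approximation, not the official OLTR code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMetaEmbedding(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Visual memory: one centroid per known (head or tail) class.
        self.centroids = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Concept selector: gates how much memory to inject per dimension.
        self.selector = nn.Linear(feat_dim, feat_dim)

    def forward(self, v_direct: torch.Tensor) -> torch.Tensor:
        # Attention over class centroids yields the memory feature.
        attn = F.softmax(v_direct @ self.centroids.t(), dim=1)   # (B, C)
        v_memory = attn @ self.centroids                          # (B, D)

        # Reachability: inverse distance to the nearest centroid; small for
        # open-set inputs far from every known class, shrinking the norm.
        dists = torch.cdist(v_direct, self.centroids)             # (B, C)
        reachability = 1.0 / (dists.min(dim=1).values + 1e-6)     # (B,)

        # Per-dimension selector gates the retrieved memory feature.
        e = torch.tanh(self.selector(v_direct))                   # (B, D)
        v_meta = v_direct + e * v_memory
        return reachability.unsqueeze(1) * v_meta

# Usage: embed a batch of backbone features before the classifier head.
if __name__ == "__main__":
    layer = DynamicMetaEmbedding(feat_dim=512, num_classes=1000)
    feats = torch.randn(8, 512)
    print(layer(feats).shape)  # torch.Size([8, 512])
```

In this sketch, the reachability scaling is what lets the downstream classifier treat a small feature norm as a signal of unfamiliarity, addressing the tail-versus-open confusion the abstract highlights.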