
Publication | Open Access

Boosting Few-Shot Visual Learning With Self-Supervision

Citations: 64 | References: 46 | Year: 2019

TLDR

Few‑shot learning seeks models that learn efficiently in low‑data regimes, while self‑supervised learning extracts supervisory signals from unlabeled data; both aim to train models with few or no labeled examples. This work proposes to enhance few‑shot learning by incorporating self‑supervision as an auxiliary task. By integrating self‑supervision into the few‑shot pipeline, feature extractors learn richer, more transferable visual representations and can additionally leverage diverse unlabeled datasets. The method yields consistent performance gains across multiple architectures, datasets, and self‑supervision techniques. Implementation code is available at https://github.com/valeoai/BF3S.

Abstract

Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data. Few-shot learning aims for optimization methods and models that can learn efficiently to recognize patterns in the low data regime. Self-supervised learning focuses instead on unlabeled data and looks into it for the supervisory signal to feed high capacity deep neural networks. In this work we exploit the complementarity of these two domains and propose an approach for improving few-shot learning through self-supervision. We use self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples. Through self-supervision, our approach can be naturally extended towards using diverse unlabeled data from other datasets in the few-shot setting. We report consistent improvements across an array of architectures, datasets and self-supervision techniques. We provide the implementation code at: https://github.com/valeoai/BF3S.
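To make the description above concrete, the sketch below (Python/PyTorch) shows one way the idea can be wired up: a shared feature extractor feeds both a classification head trained on the labeled few-shot data and a small auxiliary head trained to predict which of four rotations was applied to each image, with the two cross-entropy losses summed. This is a hypothetical illustration, not the authors' implementation; the module names, the plain linear classifier used as a stand-in for a few-shot classifier, and the choice of rotation prediction as the self-supervised task are assumptions, and the linked BF3S repository contains the actual code.

```python
# Hypothetical sketch only -- NOT the authors' BF3S code (see the repository
# above for the official implementation). It illustrates the core idea from
# the abstract: a shared feature extractor is trained with a few-shot
# classification loss plus an auxiliary self-supervised loss, here rotation
# prediction. All names and the plain linear classifier are illustrative
# placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F


def rotate_batch(images):
    """Return 4 rotated copies (0/90/180/270 degrees) of each NCHW image
    and the matching rotation labels (0..3)."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4, device=images.device).repeat_interleave(images.size(0))
    return rotated, labels


class FewShotWithSelfSupervision(nn.Module):
    def __init__(self, feature_extractor, feat_dim, num_classes):
        super().__init__()
        self.feature_extractor = feature_extractor          # any backbone producing feat_dim features
        self.classifier = nn.Linear(feat_dim, num_classes)  # placeholder for a few-shot classifier
        self.rotation_head = nn.Linear(feat_dim, 4)         # auxiliary rotation-prediction head

    def forward(self, images, class_labels, aux_weight=1.0):
        # Supervised branch: standard classification loss on labeled images.
        cls_loss = F.cross_entropy(self.classifier(self.feature_extractor(images)), class_labels)

        # Self-supervised branch: predict which rotation was applied.
        # Unlabeled images (e.g. from other datasets) could be fed here as well.
        rot_images, rot_labels = rotate_batch(images)
        rot_loss = F.cross_entropy(self.rotation_head(self.feature_extractor(rot_images)), rot_labels)

        # Total loss: supervised term plus weighted auxiliary term.
        return cls_loss + aux_weight * rot_loss
```

In a setup like this, aux_weight balances the two objectives, and extra unlabeled images can be routed through only the rotation branch, which corresponds to the abstract's point about exploiting diverse unlabeled data from other datasets.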
