Publication | Open Access
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
797 Citations | 28 References | 2017
Artificial Intelligence · Few-shot Learning · Fast Adaptation · Machine Vision · Machine Learning · Data Science · Meta-learning · Gradient Descent · Engineering · Zero-shot Learning · Machine Learning Model · Meta-learning (Computer Science) · Gradient Steps · Computer Science · Transfer Learning · Robot Learning · Deep Learning · Computer Vision
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
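The idea described in the abstract — train an initialization so that one or a few gradient steps on a new task yield good performance — can be sketched in a few lines. The toy below uses a first-order approximation of the meta-gradient (often called FOMAML, which drops the second-order terms of the full algorithm) on a hypothetical family of linear regression tasks; the task family, learning rates, and step counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(a, n=10):
    # One regression task from a hypothetical family: y = a * x.
    X = rng.uniform(-1.0, 1.0, size=(n, 1))
    return X, a * X[:, 0]

def grad(w, X, y):
    # Gradient of mean squared error for the linear model X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fomaml(meta_steps=500, inner_lr=0.5, outer_lr=0.1):
    w = np.zeros(1)  # meta-initialization being learned
    for _ in range(meta_steps):
        a = rng.uniform(-2.0, 2.0)     # sample a task
        Xs, ys = make_data(a)          # support set (adaptation data)
        Xq, yq = make_data(a)          # query set (evaluation data)
        w_adapted = w - inner_lr * grad(w, Xs, ys)    # inner-loop step
        # First-order outer step: gradient of the post-adaptation loss,
        # ignoring how w_adapted depends on w (the full method would
        # differentiate through the inner step).
        w = w - outer_lr * grad(w_adapted, Xq, yq)
    return w

w_meta = fomaml()

# Fast adaptation to a new task: one gradient step from the meta-init.
a_new = 1.5
Xs, ys = make_data(a_new)
w_new = w_meta - 0.5 * grad(w_meta, Xs, ys)
```

The point of the sketch is the two nested loops: the inner step simulates fine-tuning on a task, and the outer step moves the shared initialization so that this fine-tuning works well, mirroring the "trains the model to be easy to fine-tune" framing above.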
| Year | Citations |
|---|---|
| 2024 | 15.6K |
| 2016 | 9.7K |
| 2017 | 5.8K |
| 2012 | 4.3K |
| 2013 | 3.6K |
| 2015 | 3.2K |
| 2017 | 2.4K |
| 2016 | 1.3K |
| 2016 | 966 |
| 2016 | 942 |