Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, Chelsea Finn, Pieter Abbeel, Sergey Levine, 2017. Proceedings of the 34th International Conference on Machine Learning, Vol. 70 (PMLR). DOI: 10.48550/arXiv.1703.03400 - Presents a general meta-learning algorithm that learns an initial set of parameters from which only a few gradient steps are needed to adapt to a new task. It articulates the inner- and outer-loop optimization structure.
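The inner/outer-loop structure described above can be sketched in a few lines. This is a minimal first-order approximation of MAML on toy 1-D regression tasks, not the paper's algorithm (the original differentiates through the inner step; the task distribution, learning rates, and function names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Gradient of mean squared error for the scalar model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

def maml_step(w_meta, tasks, inner_lr=0.1, outer_lr=0.1):
    # Inner loop: adapt to each task with one gradient step on its support set.
    # Outer loop: update the meta-initialization with the post-adaptation
    # gradient on the query set (first-order, ignoring second derivatives).
    meta_grad = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        w_task = w_meta - inner_lr * loss_grad(w_meta, x_s, y_s)  # inner step
        meta_grad += loss_grad(w_task, x_q, y_q)                  # outer signal
    return w_meta - outer_lr * meta_grad / len(tasks)

def sample_task():
    # Hypothetical task family: slopes drawn near 3; split support/query.
    slope = 3.0 + 0.1 * rng.standard_normal()
    x = rng.uniform(-1, 1, 20)
    y = slope * x
    return x[:10], y[:10], x[10:], y[10:]

w = 0.0
for _ in range(2000):
    w = maml_step(w, [sample_task() for _ in range(4)])
```

After meta-training, `w` sits near the center of the task distribution, so a single inner-loop step adapts it closely to any sampled slope.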
Prototypical Networks for Few-Shot Learning, Jake Snell, Kevin Swersky, Richard Zemel, 2017. Advances in Neural Information Processing Systems, Vol. 30 (NeurIPS). DOI: 10.48550/arXiv.1703.05175 - Introduces a metric-based meta-learning approach in which classification is performed by computing distances to a prototype representation of each class, derived from the support-set examples. It illustrates the use of support and query sets.
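The prototype mechanism above reduces to a mean-and-nearest-neighbor computation. The sketch below uses a 2-D identity "embedding" purely for illustration (the paper learns an embedding network and applies a softmax over negative distances; the data here is made up):

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    # Prototype of a class = mean of the embedded support points in that class.
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    # Squared Euclidean distance to each prototype; the nearest prototype wins
    # (the paper takes a softmax over negative distances instead of argmin).
    d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

# A toy 2-way 3-shot episode.
support_x = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],    # class 0
                      [5.0, 5.0], [5.2, 5.0], [5.0, 5.2]])   # class 1
support_y = np.array([0, 0, 0, 1, 1, 1])

protos = prototypes(support_x, support_y, n_classes=2)
pred = classify(np.array([[0.1, 0.1], [4.9, 5.1]]), protos)
# pred -> array([0, 1])
```

The support set builds the prototypes and the query set is classified against them, which is exactly the episode structure the entry refers to.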
Learning to learn by gradient descent by gradient descent, Marcin Andrychowicz, Misha Denil, Sergio Gómez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas, 2016. Advances in Neural Information Processing Systems, Vol. 29 (NeurIPS). DOI: 10.48550/arXiv.1606.04474 - Proposes training a neural network to act as the optimizer (the meta-learner), demonstrating the 'learning to learn' concept in a deep learning context. It frames meta-learning as optimizing an optimization process.
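The idea of optimizing an optimization process can be shown in a deliberately degenerate form: instead of the paper's LSTM optimizer trained by backpropagation through the unrolled updates, this sketch meta-learns a single step size by finite-difference gradient on the unrolled loss (everything here is an illustrative assumption, not the paper's method):

```python
import numpy as np

def optimizee_loss(w):
    # Fixed quadratic optimizee: f(w) = (w - 2)^2, minimized at w = 2.
    return (w - 2.0) ** 2

def optimizee_grad(w):
    return 2.0 * (w - 2.0)

def unrolled_loss(log_lr, steps=5, w0=0.0):
    # Run the optimizee for `steps` updates with the meta-learned step size;
    # the loss after unrolling is the meta-objective.
    w, lr = w0, np.exp(log_lr)
    for _ in range(steps):
        w = w - lr * optimizee_grad(w)
    return optimizee_loss(w)

# Meta-training: gradient descent on the unrolled loss, estimating the
# meta-gradient by central finite differences to avoid second derivatives.
log_lr, eps, meta_lr = np.log(0.01), 1e-4, 0.5
for _ in range(200):
    g = (unrolled_loss(log_lr + eps) - unrolled_loss(log_lr - eps)) / (2 * eps)
    log_lr -= meta_lr * g
```

The meta-learner here has one parameter where the paper uses a recurrent network, but the nesting is the same: an outer optimizer improves an inner optimizer by judging how well the inner one minimizes its objective.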