Meta Learning
Meta-learning, or "learning to learn," focuses on developing algorithms that can efficiently adapt to new tasks with limited data by leveraging prior experience from related tasks. Current research emphasizes improving the robustness and efficiency of meta-learning algorithms, particularly in low-resource settings, often employing model-agnostic meta-learning (MAML) and its variants, along with techniques like dynamic head networks and reinforcement learning for task selection. This field is significant because it addresses the limitations of traditional machine learning in data-scarce scenarios, with applications ranging from speech and image recognition to robotics and personalized medicine.
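The core mechanism behind MAML mentioned above can be sketched in a few lines: an inner loop takes one gradient step on a task's support set, and an outer loop updates the shared initialization so that this one step works well on the task's query set. The sketch below is an illustrative toy, not any paper's implementation: it assumes a scalar linear model f(x) = w·x, tasks that differ only in slope, and hand-picked learning rates; for this model the second-order MAML term can be written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.3, 0.05  # inner / outer learning rates (assumed values)

def loss_grad(w, x, y):
    # Gradient of the MSE loss for the scalar linear model f(x) = w * x
    return np.mean(2 * x * (w * x - y))

w = 0.0  # meta-learned initialization
for step in range(2000):
    slope = rng.uniform(-2, 2)                    # sample a task
    x_s, x_q = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    y_s, y_q = slope * x_s, slope * x_q
    # Inner loop: one adaptation step on the support set
    w_adapted = w - alpha * loss_grad(w, x_s, y_s)
    # Outer loop: meta-gradient on the query set; for this scalar model
    # the second-order term dw_adapted/dw is available in closed form
    d_adapted_dw = 1 - alpha * np.mean(2 * x_s ** 2)
    meta_grad = loss_grad(w_adapted, x_q, y_q) * d_adapted_dw
    w = w - beta * meta_grad
```

After meta-training, a single inner-loop step on a handful of examples from an unseen task should reduce that task's loss, which is the "efficient adaptation with limited data" property the paragraph describes. In practice the closed-form second-order term is replaced by automatic differentiation through the inner update.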
Papers
Memory-Based Meta-Learning on Non-Stationary Distributions
Tim Genewein, Grégoire Delétang, Anian Ruoss, Li Kevin Wenliang, Elliot Catt, Vincent Dutordoir, Jordi Grau-Moya, Laurent Orseau, Marcus Hutter, Joel Veness
APAM: Adaptive Pre-training and Adaptive Meta Learning in Language Model for Noisy Labels and Long-tailed Learning
Sunyi Chi, Bo Dong, Yiming Xu, Zhenyu Shi, Zheng Du