Meta Learning
Meta-learning, or "learning to learn," focuses on developing algorithms that can efficiently adapt to new tasks with limited data by leveraging prior experience from related tasks. Current research emphasizes improving the robustness and efficiency of meta-learning algorithms, particularly in low-resource settings, often employing model-agnostic meta-learning (MAML) and its variants, along with techniques like dynamic head networks and reinforcement learning for task selection. This field is significant because it addresses the limitations of traditional machine learning in data-scarce scenarios, with applications ranging from speech and image recognition to robotics and personalized medicine.
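Since the description above names MAML as a workhorse method, here is a minimal sketch of the idea in first-order form (FOMAML) on toy linear-regression tasks. The task distribution, model, and hyperparameters are illustrative assumptions, not taken from the papers listed below; the point is only the inner-adapt / outer-update structure.

```python
# Minimal first-order MAML (FOMAML) sketch on toy linear-regression tasks.
# Everything here (task family, learning rates, model) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a random linear function y = a*x + c; returns a data sampler."""
    a, c = rng.uniform(0.5, 2.5), rng.uniform(-1.0, 1.0)
    def draw(n):
        x = rng.uniform(-5.0, 5.0, n)
        return x, a * x + c
    return draw

def grads(w, b, x, y):
    """Analytic MSE gradients for the linear model y_hat = w*x + b."""
    err = w * x + b - y
    return 2 * np.mean(err * x), 2 * np.mean(err)

w, b = 0.0, 0.0                  # meta-parameters: the shared initialization
inner_lr, outer_lr, tasks_per_batch = 0.01, 0.001, 8

for step in range(2000):
    meta_gw, meta_gb = 0.0, 0.0
    for _ in range(tasks_per_batch):
        draw = sample_task()
        xs, ys = draw(10)        # support set: one inner adaptation step
        gw, gb = grads(w, b, xs, ys)
        w_i, b_i = w - inner_lr * gw, b - inner_lr * gb
        xq, yq = draw(10)        # query set: evaluate the adapted parameters
        gw_q, gb_q = grads(w_i, b_i, xq, yq)
        # First-order approximation: treat d(adapted params)/d(init) as identity,
        # so the meta-gradient is just the query gradient at the adapted point.
        meta_gw += gw_q / tasks_per_batch
        meta_gb += gb_q / tasks_per_batch
    w -= outer_lr * meta_gw      # outer (meta) update of the initialization
    b -= outer_lr * meta_gb

print(f"meta-learned initialization: w={w:.3f}, b={b:.3f}")
```

Full MAML would backpropagate through the inner update (second-order gradients); the first-order variant above drops that term for simplicity, which is a common trade-off in practice.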
Papers
Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales
Meta-Optimization for Higher Model Generalizability in Single-Image Depth Prediction
Cho-Ying Wu, Yiqi Zhong, Junying Wang, Ulrich Neumann