Meta-Learning
Meta-learning, or "learning to learn," develops algorithms that adapt efficiently to new tasks with limited data by leveraging prior experience from related tasks. Current research emphasizes improving the robustness and efficiency of meta-learning algorithms, particularly in low-resource settings, often building on model-agnostic meta-learning (MAML) and its variants, along with techniques such as dynamic head networks and reinforcement learning for task selection. The field matters because it addresses the limitations of traditional machine learning in data-scarce scenarios, with applications ranging from speech and image recognition to robotics and personalized medicine.
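To make the MAML idea above concrete, here is a minimal sketch of its first-order variant (often called FOMAML): an inner loop adapts a copy of the meta-parameters to each task's support set, and the outer loop updates the meta-parameters using the query-set gradient at the adapted parameters. The task distribution (random linear-regression tasks), the tiny linear model, and all hyperparameters are illustrative assumptions for this sketch, not taken from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # A task is a random linear regression problem y = a*x + b.
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=(20, 1))
    return x, a * x + b

def loss_and_grad(w, x, y):
    # MSE loss and gradient for the model y_hat = w[0]*x + w[1].
    err = w[0] * x + w[1] - y
    loss = np.mean(err ** 2)
    return loss, np.array([2 * np.mean(err * x), 2 * np.mean(err)])

def fomaml_step(w, tasks, inner_lr=0.1, outer_lr=0.01, inner_steps=5):
    meta_grad = np.zeros_like(w)
    for x, y in tasks:
        # Support set for adaptation, query set for the meta-objective.
        xs, ys, xq, yq = x[:10], y[:10], x[10:], y[10:]
        w_task = w.copy()
        for _ in range(inner_steps):
            _, g = loss_and_grad(w_task, xs, ys)
            w_task -= inner_lr * g  # inner-loop adaptation
        # First-order approximation: use the query gradient at the
        # adapted parameters directly, skipping second derivatives.
        _, gq = loss_and_grad(w_task, xq, yq)
        meta_grad += gq
    return w - outer_lr * meta_grad / len(tasks)

# Meta-train the initialization over batches of sampled tasks.
w = np.zeros(2)
for _ in range(500):
    w = fomaml_step(w, [make_task() for _ in range(4)])
```

After meta-training, `w` is an initialization from which a few inner-loop gradient steps fit a fresh task; full MAML would instead backpropagate through the inner loop, which the first-order variant deliberately avoids for efficiency.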
Papers
Learning to Continually Learn with the Bayesian Principle
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim
On the Limits of Multi-modal Meta-Learning with Auxiliary Task Modulation Using Conditional Batch Normalization
Jordi Armengol-Estapé, Vincent Michalski, Ramnath Kumar, Pierre-Luc St-Charles, Doina Precup, Samira Ebrahimi Kahou
A CMDP-within-online framework for Meta-Safe Reinforcement Learning
Vanshaj Khattar, Yuhao Ding, Bilgehan Sel, Javad Lavaei, Ming Jin
Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models
Yongxian Wei, Zixuan Hu, Li Shen, Zhenyi Wang, Yu Li, Chun Yuan, Dacheng Tao
Perturbing the Gradient for Alleviating Meta Overfitting
Manas Gogoi, Sambhavi Tiwari, Shekhar Verma
Alzheimer's Magnetic Resonance Imaging Classification Using Deep and Meta-Learning Models
Nida Nasir, Muneeb Ahmed, Neda Afreen, Mustafa Sameer
Scrutinize What We Ignore: Reining In Task Representation Shift Of Context-Based Offline Meta Reinforcement Learning
Hai Zhang, Boyuan Zheng, Tianying Ji, Jinhang Liu, Anqi Guo, Junqiao Zhao, Lanqing Li