Few-Shot Learning
Few-shot learning (FSL) aims to train machine learning models that can learn new concepts or tasks from only a handful of examples, addressing the limitation of traditional methods that require massive labeled datasets. Current research focuses on improving model robustness to noisy data and heterogeneous tasks, exploring architectures such as prototypical networks and meta-learning algorithms, and leveraging large vision-language models and external memory for better performance. The field is crucial for advancing AI in data-scarce domains such as medical image analysis and personalized medicine, where acquiring large labeled datasets is often impractical or impossible. Efficient and reliable FSL methods have significant implications for applications including object detection, natural language processing, and other areas where labeled data is limited.
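To make the idea concrete: at inference time, the prototypical networks mentioned above reduce to nearest-prototype classification in an embedding space. The sketch below (a minimal illustration, not any specific paper's implementation; the episode data and 2-D "embeddings" are made up) computes one prototype per class as the mean of its support embeddings and assigns each query to the closest prototype.

```python
import numpy as np

def prototypical_classify(support_x, support_y, query_x):
    """Classify queries by nearest class prototype (squared Euclidean distance).

    support_x: (n_support, d) embedded support examples
    support_y: (n_support,) integer class labels
    query_x:   (n_query, d) embedded query examples
    Returns an array of predicted labels, one per query.
    """
    classes = np.unique(support_y)
    # Prototype = mean embedding of each class's support examples.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from each query to each prototype.
    dists = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode in a 2-D embedding space (illustrative values).
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [1.0, 0.9]])
print(prototypical_classify(support_x, support_y, query_x))  # → [0 1]
```

In a full prototypical network, the embeddings would come from a neural encoder trained episodically so that this nearest-prototype rule generalizes to unseen classes; the classification rule itself stays this simple.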
Papers
Gradient Boosting Trees and Large Language Models for Tabular Data Few-Shot Learning
Carlos Huertas
Quantum Diffusion Models for Few-Shot Learning
Ruhan Wang, Ye Wang, Jing Liu, Toshiaki Koike-Akino
A Contrastive Self-Supervised Learning scheme for beat tracking amenable to few-shot learning
Antonin Gagnere (LTCI, IDS, S2A), Geoffroy Peeters (LTCI, S2A, IDS), Slim Essid (IDS, S2A, LTCI)
Explainable few-shot learning workflow for detecting invasive and exotic tree species
Caroline M. Gevaert, Alexandra Aguiar Pedro, Ou Ku, Hao Cheng, Pranav Chandramouli, Farzaneh Dadrass Javan, Francesco Nattino, Sonja Georgievska
Fast Adaptation with Kernel and Gradient based Meta Learning
JuneYoung Park, MinJae Kang