Few-Shot Learning
Few-shot learning (FSL) aims to train machine learning models that can learn new concepts or tasks from only a handful of examples, addressing the limitations of traditional methods that require massive labeled datasets. Current research focuses on improving robustness to noisy data and heterogeneous tasks, exploring architectures such as prototypical networks and meta-learning algorithms, and leveraging large vision-language models and external memory for stronger performance. The field is crucial for advancing AI in data-scarce domains such as medical image analysis and personalized medicine, where acquiring large labeled datasets is often impractical or impossible. Efficient and reliable FSL methods therefore have significant implications for applications including object detection and natural language processing, and for any setting where labeled data is limited.
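To make the prototypical-network idea mentioned above concrete, here is a minimal sketch of its classification rule: each class is represented by the mean ("prototype") of its few support embeddings, and a query is assigned to the nearest prototype. The function names and the toy 2-D "embeddings" are illustrative only; a real system would embed inputs with a learned encoder.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """Mean embedding per class: the 'prototype' of each concept."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embeddings, protos):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    # dists[i, c] = ||query_i - proto_c||^2
    dists = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with hand-crafted 2-D embeddings.
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)
queries = np.array([[0.1, 0.0], [1.0, 0.9]])
print(classify(queries, protos))  # -> [0 1]
```

Because classification reduces to a nearest-mean rule in embedding space, no gradient updates are needed at test time, which is exactly what makes this family of methods sample-efficient.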
Papers
Sample-Efficient Learning of Novel Visual Concepts
Sarthak Bhagat, Simon Stepputtis, Joseph Campbell, Katia Sycara
Few-shot bioacoustic event detection at the DCASE 2023 challenge
Ines Nolasco, Burooj Ghani, Shubhr Singh, Ester Vidaña-Vila, Helen Whitehead, Emily Grout, Michael Emmerson, Frants Jensen, Ivan Kiskin, Joe Morford, Ariana Strandburg-Peshkin, Lisa Gill, Hanna Pamuła, Vincent Lostanlen, Dan Stowell
MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
Calum Heggan, Tim Hospedales, Sam Budgett, Mehrdad Yaghoobi
The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification
Linhao Qu, Xiaoyuan Luo, Kexue Fu, Manning Wang, Zhijian Song