Few-Shot Learning
Few-shot learning (FSL) aims to train machine learning models that can effectively learn new concepts or tasks from only a small number of examples, addressing the limitations of traditional methods requiring massive datasets. Current research focuses on improving model robustness to noisy data and heterogeneous tasks, exploring architectures like prototypical networks and meta-learning algorithms, and leveraging large vision-language models and external memory for enhanced performance. This field is crucial for advancing AI in data-scarce domains like medical image analysis and personalized medicine, where acquiring large labeled datasets is often impractical or impossible. The development of efficient and reliable FSL methods has significant implications for various applications, including object detection, natural language processing, and other areas where labeled data is limited.
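The prototypical-network idea mentioned above can be illustrated with a minimal sketch: embed the few labeled support examples, average them per class into a "prototype", and label each query by its nearest prototype. The embeddings, labels, and function names below are hypothetical toy data for illustration, not from any of the listed papers.

```python
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Compute one prototype per class: the mean of its support embeddings."""
    classes = np.unique(support_labels)
    protos = np.stack([
        support_embeddings[support_labels == c].mean(axis=0) for c in classes
    ])
    return classes, protos

def classify(query_embeddings, classes, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    # Pairwise squared distances, shape (n_query, n_classes).
    dists = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.0], [1.0, 0.9]])

cls, protos = prototypes(support, labels)
print(classify(queries, cls, protos))  # → [0 1]
```

In practice the embeddings come from a neural encoder trained episodically so that this nearest-prototype rule generalizes to classes unseen during training; the sketch only shows the inference-time geometry.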
Papers
Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Few-shot Incremental Image Synthesis
Chaerin Kong, Nojun Kwak
A Survey of Learning on Small Data: Generalization, Optimization, and Challenge
Xiaofeng Cao, Weixin Bu, Shengjun Huang, Minling Zhang, Ivor W. Tsang, Yew Soon Ong, James T. Kwok
Tree Structure-Aware Few-Shot Image Classification via Hierarchical Aggregation
Min Zhang, Siteng Huang, Wenbin Li, Donglin Wang
Instance Selection Mechanisms for Human-in-the-Loop Systems in Few-Shot Learning
Johannes Jakubik, Benedikt Blumenstiel, Michael Vössing, Patrick Hemmer
Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning
Xingping Dong, Tianran Ouyang, Shengcai Liao, Bo Du, Ling Shao