Few-Shot Learning
Few-shot learning (FSL) aims to train machine learning models that can effectively learn new concepts or tasks from only a small number of examples, addressing the limitations of traditional methods that require massive datasets. Current research focuses on improving model robustness to noisy data and heterogeneous tasks, exploring architectures such as prototypical networks and meta-learning algorithms, and leveraging large vision-language models and external memory for enhanced performance. This field is crucial for advancing AI in data-scarce domains such as medical image analysis and personalized medicine, where acquiring large labeled datasets is often impractical or impossible. The development of efficient and reliable FSL methods has significant implications for many applications, including object detection, natural language processing, and other areas where labeled data is limited.
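To make the prototypical-network idea mentioned above concrete, here is a minimal sketch: each class prototype is the mean embedding of its few labeled support examples, and queries are classified by the nearest prototype. This assumes some embedding function has already mapped inputs to feature vectors; the function names and toy data are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """Mean embedding per class, computed from the few support examples."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embeddings, protos):
    """Assign each query to the class with the nearest prototype (Euclidean)."""
    # Squared distances, shape (n_queries, n_classes).
    d = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with pre-computed 2-D "embeddings".
support = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0
                    [5.0, 5.0], [4.9, 5.2]])   # class 1
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.2], [4.8, 5.1]])

preds = classify(queries, prototypes(support, labels, n_classes=2))
# preds -> array([0, 1])
```

In practice the embeddings come from a network trained episodically so that this nearest-prototype rule generalizes to classes unseen during training.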
Papers
Disentangled Generation with Information Bottleneck for Few-Shot Learning
Zhuohang Dang, Jihong Wang, Minnan Luo, Chengyou Jia, Caixia Yan, Qinghua Zheng
Better Generalized Few-Shot Learning Even Without Base Data
Seong-Woong Kim, Dong-Wan Choi
PatchMix Augmentation to Identify Causal Features in Few-shot Learning
Chengming Xu, Chen Liu, Xinwei Sun, Siqian Yang, Yabiao Wang, Chengjie Wang, Yanwei Fu
Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control
Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
Few-shot Classification with Hypersphere Modeling of Prototypes
Ning Ding, Yulin Chen, Ganqu Cui, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie