Few-Shot Learning
Few-shot learning (FSL) aims to train machine learning models that can learn new concepts or tasks from only a handful of examples, addressing the limitations of traditional methods that require massive labeled datasets. Current research focuses on improving model robustness to noisy data and heterogeneous tasks, exploring architectures such as prototypical networks and meta-learning algorithms, and leveraging large vision-language models and external memory for enhanced performance. This field is crucial for advancing AI in data-scarce domains such as medical image analysis and personalized medicine, where acquiring large labeled datasets is often impractical or impossible. The development of efficient and reliable FSL methods has significant implications for applications including object detection, natural language processing, and other areas where labeled data is limited.
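To make the prototypical-network idea mentioned above concrete, here is a minimal sketch of its classification rule: each class prototype is the mean of that class's support-set embeddings, and a query is assigned to the nearest prototype. The function name and the toy 2-D "embeddings" are illustrative assumptions, not from any specific paper's code.

```python
import numpy as np

def prototypical_classify(support, support_labels, queries):
    """Classify query embeddings by distance to class prototypes.

    support: (n_support, d) embeddings of labeled support examples
    support_labels: (n_support,) integer class labels
    queries: (n_query, d) embeddings to classify
    Returns the predicted class label for each query.
    """
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's support examples
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from every query to every prototype
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Assign each query to its nearest prototype
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with hand-made 2-D embeddings
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.0], [1.0, 0.9]])
print(prototypical_classify(support, labels, queries))  # [0 1]
```

In a full system the embeddings would come from a neural encoder trained episodically; the nearest-prototype rule itself needs no gradient updates at test time, which is what makes it attractive for data-scarce settings.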
Papers
UniFS: Universal Few-shot Instance Perception with Point Representations
Sheng Jin, Ruijie Yao, Lumin Xu, Wentao Liu, Chen Qian, Ji Wu, Ping Luo
PEFSL: A deployment Pipeline for Embedded Few-Shot Learning on a FPGA SoC
Lucas Grativol Ribeiro, Lubin Gauthier, Mathieu Leonardon, Jérémy Morlier, Antoine Lavrard-Meyer, Guillaume Muller, Virginie Fresse, Matthieu Arzel
Robust Few-Shot Ensemble Learning with Focal Diversity-Based Pruning
Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Ka-Ho Chow, Margaret L. Loper, Ling Liu
No Time to Train: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation
Xiangyang Zhu, Renrui Zhang, Bowei He, Ziyu Guo, Jiaming Liu, Han Xiao, Chaoyou Fu, Hao Dong, Peng Gao