Prompt-Based Few-Shot Learning
Prompt-based few-shot learning aims to leverage the power of large language models (LLMs) for various tasks using only a small number of training examples, thereby reducing the need for extensive data annotation and model fine-tuning. Current research focuses on improving the design of prompts, exploring different LLM architectures (including GPT models and ELECTRA), and developing data augmentation techniques to enhance performance in few-shot scenarios. This approach holds significant promise for improving efficiency and generalizability in natural language processing, impacting fields like healthcare, education, and information retrieval by enabling the application of powerful LLMs to data-scarce domains.
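As a minimal illustration of the core idea, a few-shot prompt can be built by concatenating a handful of labeled examples in front of the query so the LLM completes the missing label. The sketch below uses a hypothetical sentiment-classification template and a stand-in for the model call; the formatting and example data are assumptions for illustration, not taken from the papers listed here.

```python
def build_few_shot_prompt(examples, query):
    """Format labeled (text, label) pairs plus a query into one prompt string.

    The template below is an illustrative assumption; in practice the
    prompt design itself is a key research variable in few-shot learning.
    """
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)


examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]

prompt = build_few_shot_prompt(examples, "A delightful surprise.")
# `prompt` would then be sent to any completion-style LLM endpoint;
# no real API is called in this sketch.
print(prompt)
```

Because no model weights are updated, the labeled examples act as in-context demonstrations rather than training data, which is what makes the approach attractive in data-scarce domains.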
Papers
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning
Amirhossein Abaskohi, Sascha Rothe, Yadollah Yaghoobzadeh
The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification
Linhao Qu, Xiaoyuan Luo, Kexue Fu, Manning Wang, Zhijian Song