Prompt-Based Learning
Prompt-based learning adapts pre-trained language models to downstream tasks by supplying carefully crafted prompts rather than updating model weights, which makes it especially attractive in low-resource settings. Current research focuses on optimizing prompt design, exploring prompt architectures and search algorithms that improve accuracy and efficiency across applications such as disease diagnosis, causal discovery, and code summarization. Compared with traditional fine-tuning, this approach reduces computational cost and data requirements while improving model adaptability and interpretability.
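The core pattern behind cloze-style prompting can be sketched in a few lines: a template wraps the input around a mask slot, and a verbalizer maps task labels to candidate fill-in words. The sketch below is illustrative only; the scoring function is a toy keyword-based stand-in for a real pre-trained masked language model, and the template and verbalizer words are hypothetical choices, not drawn from any specific paper above.

```python
# Sketch of the cloze-prompt pattern used in prompt-based learning.
# NOTE: toy_mask_score is a keyword-count stand-in for a real masked LM;
# in practice one would score each verbalizer token at the [MASK]
# position with a pre-trained model instead.

TEMPLATE = "Review: {text} Overall, it was [MASK]."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def toy_mask_score(prompt: str, candidate: str) -> float:
    """Toy stand-in for P(candidate fills [MASK] | prompt)."""
    cues = {"great": ["good", "love", "excellent"],
            "terrible": ["bad", "hate", "awful"]}
    text = prompt.lower()
    return sum(text.count(cue) for cue in cues[candidate])

def classify(text: str) -> str:
    """Fill the template, then pick the label whose verbalizer word scores highest."""
    prompt = TEMPLATE.format(text=text)
    return max(VERBALIZER,
               key=lambda label: toy_mask_score(prompt, VERBALIZER[label]))

print(classify("I love this phone, the camera is excellent."))  # -> positive
```

Because only the template and verbalizer are task-specific, the underlying model stays frozen, which is the source of the reduced compute and data cost mentioned above.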
Papers
On Measuring Social Biases in Prompt-Based Multi-Task Learning
Afra Feyza Akyürek, Sejin Paik, Muhammed Yusuf Kocyigit, Seda Akbiyik, Şerife Leman Runyun, Derry Wijaya
Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt
Jiangmeng Li, Wenyi Mo, Wenwen Qiang, Bing Su, Changwen Zheng, Hui Xiong, Ji-Rong Wen