Prompt Learning
Prompt learning adapts pre-trained models, particularly large language and vision-language models, to diverse downstream tasks by learning or optimizing input prompts rather than retraining the entire model. Current research focuses on efficient prompt learning strategies, including soft prompt optimization, multi-modal prompt integration, and hierarchical prompt structures, often applied within architectures such as CLIP and other transformer-based models. Because only a small set of prompt parameters is trained, the approach is computationally cheap and data-efficient, enabling rapid adaptation to new tasks and domains with limited resources in fields like recommendation systems, medical image analysis, and natural language processing.
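To make the core idea concrete, here is a minimal PyTorch sketch of soft prompt tuning: a small set of continuous prompt vectors is prepended to the input embeddings and optimized, while every backbone parameter stays frozen. The class name, dimensions, and the toy transformer encoder are illustrative assumptions, not drawn from any of the papers listed below.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen backbone with learnable soft prompt embeddings.

    A generic sketch of soft prompt optimization; the backbone here is a
    placeholder for any pre-trained transformer.
    """

    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompt_tokens: int = 8):
        super().__init__()
        self.backbone = backbone
        # Freeze every backbone parameter; only the prompt is trained.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Learnable "soft prompt": continuous vectors prepended to the input.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        batch = token_embeddings.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the prompt vectors, then run the frozen backbone as usual.
        return self.backbone(torch.cat([prompt, token_embeddings], dim=1))

# Toy usage: a small frozen transformer encoder stands in for a pre-trained model.
embed_dim = 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=2,
)
model = SoftPromptModel(encoder, embed_dim=embed_dim, num_prompt_tokens=8)

# Only the soft prompt is passed to the optimizer.
optimizer = torch.optim.AdamW([model.soft_prompt], lr=1e-3)
x = torch.randn(2, 16, embed_dim)   # fake token embeddings
loss = model(x).mean()              # placeholder loss for illustration
loss.backward()
optimizer.step()
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")
```

Only the prompt tensor receives gradient updates, which is why prompt learning trains orders of magnitude fewer parameters than full fine-tuning of the same backbone.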
Papers
CMAL: A Novel Cross-Modal Associative Learning Framework for Vision-Language Pre-Training
Zhiyuan Ma, Jianjun Li, Guohui Li, Kaiyan Huang
Adaptive Prompt Learning with SAM for Few-shot Scanning Probe Microscope Image Segmentation
Yao Shen, Ziwei Wei, Chunmeng Liu, Shuming Wei, Qi Zhao, Kaiyang Zeng, Guangyao Li
Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts
Fanqi Yan, Huy Nguyen, Dung Le, Pedram Akbarian, Nhat Ho