Federated Prompt Learning
Federated Prompt Learning (FPL) adapts large pre-trained models to diverse, decentralized datasets without directly sharing sensitive data, by efficiently tuning prompts rather than the entire model. Current research emphasizes balancing personalization (adapting to individual client data) with generalization (maintaining robust performance across all clients), often employing techniques such as low-rank adaptation, contrastive learning, and adaptive prompt tuning within federated averaging or federated gradient-descent frameworks. The approach offers significant advantages for privacy-preserving collaborative learning, particularly on resource-constrained devices and in applications involving sensitive data, such as medical imaging or personalized language models.
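To make the core loop concrete, here is a minimal sketch in Python/NumPy of FedAvg-style prompt tuning: the pre-trained backbone stays frozen (stood in for by a toy quadratic loss), each client updates only a small prompt tensor on its private data, and the server averages just those prompt parameters. All names here (local_prompt_step, the per-client "optimum") are illustrative assumptions, not the method of any paper listed below.

import numpy as np

rng = np.random.default_rng(0)

PROMPT_LEN, EMBED_DIM = 4, 16          # tiny prompt: 4 tokens x 16-dim embeddings
NUM_CLIENTS, ROUNDS, LOCAL_STEPS = 3, 5, 10
LR = 0.1

def local_loss_grad(prompt, client_data):
    """Gradient of a toy quadratic loss pulling the prompt toward this
    client's private optimum; stands in for backprop through a frozen model."""
    return prompt - client_data["optimum"]

def local_prompt_step(global_prompt, client_data):
    """Client-side update: tune only the prompt; raw data never leaves the client."""
    prompt = global_prompt.copy()
    for _ in range(LOCAL_STEPS):
        prompt -= LR * local_loss_grad(prompt, client_data)
    return prompt

# Each client's private data induces a different ideal prompt (data heterogeneity).
clients = [{"optimum": rng.normal(size=(PROMPT_LEN, EMBED_DIM))}
           for _ in range(NUM_CLIENTS)]

global_prompt = np.zeros((PROMPT_LEN, EMBED_DIM))
for rnd in range(ROUNDS):
    # Clients tune the shared prompt locally ...
    updates = [local_prompt_step(global_prompt, c) for c in clients]
    # ... and the server averages only the prompt parameters, never the data.
    global_prompt = np.mean(updates, axis=0)
    print(f"round {rnd}: prompt norm = {np.linalg.norm(global_prompt):.3f}")

Because only the prompt tensor (here 4 x 16 values) travels between clients and server, communication cost is a small fraction of exchanging full model weights, which is what makes this setup attractive for resource-constrained devices.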
Papers
Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models
Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, Dacheng Tao
Inclusive Data Representation in Federated Learning: A Novel Approach Integrating Textual and Visual Prompt
Zihao Zhao, Zhenpeng Shi, Yang Liu, Wenbo Ding