Federated Prompt Learning

Federated Prompt Learning (FPL) adapts large pre-trained models to diverse, decentralized datasets without directly sharing sensitive data, tuning lightweight prompts instead of the entire model. Current research emphasizes balancing personalization (adapting to individual client data) with generalization (maintaining robust performance across all clients), often through techniques such as low-rank adaptation, contrastive learning, and adaptive prompt tuning within federated averaging or gradient descent frameworks. The approach enables privacy-preserving collaborative learning, particularly on resource-constrained devices and in applications involving sensitive data, such as medical imaging or personalized language models.
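
To make the basic loop concrete, here is a minimal sketch of prompt-only federated averaging: each client locally tunes a small prompt tensor while the frozen backbone never leaves its device, and the server aggregates only the prompts. The quadratic stand-in objective, the function names (`local_prompt_update`, `fedavg_prompts`), and the tensor shapes are illustrative assumptions, not any specific paper's protocol.

```python
import numpy as np

# Illustrative prompt shape: a few learnable prompt tokens of the
# backbone's embedding width. The backbone itself stays frozen and
# is never communicated; only this small tensor is trained and shared.
PROMPT_TOKENS, EMBED_DIM = 8, 32

def local_prompt_update(prompt, client_target, lr=0.1, steps=5):
    """Local gradient descent on a toy quadratic stand-in objective
    ||prompt - client_target||^2, whose gradient is 2 * (prompt - target).
    In a real system this would be the task loss through the frozen model."""
    for _ in range(steps):
        prompt = prompt - lr * 2.0 * (prompt - client_target)
    return prompt

def fedavg_prompts(client_prompts, client_sizes):
    """Server step: data-size-weighted federated average of prompts only."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_prompts))

# Toy run: three clients with heterogeneous (synthetic) objectives.
rng = np.random.default_rng(0)
global_prompt = np.zeros((PROMPT_TOKENS, EMBED_DIM))
targets = [rng.normal(size=global_prompt.shape) for _ in range(3)]
sizes = [100, 300, 50]

for _ in range(10):  # communication rounds
    local = [local_prompt_update(global_prompt.copy(), t) for t in targets]
    global_prompt = fedavg_prompts(local, sizes)
```

Personalization-oriented variants typically depart from this baseline at the aggregation step, for example keeping part of each client's prompt local or mixing the global prompt with a client-specific one rather than averaging everything.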

Papers