Federated Prompt Cooperation
Federated prompt cooperation studies collaborative machine learning across decentralized datasets, aiming to train shared models while preserving data privacy and coping with data heterogeneity. Current research focuses on adapting a range of architectures, including variational autoencoders, transformers, and reinforcement learning algorithms, to the federated setting, often using techniques such as low-rank adaptation and prompt engineering to reduce communication overhead and improve model personalization. The approach is promising for applications that require distributed data analysis, such as healthcare, industrial IoT, and social network analysis, because it enables more accurate and robust models while respecting data privacy constraints.
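The shared-model training described above typically rests on federated averaging (FedAvg), the aggregation step also analyzed in the Mangold et al. paper below: each client trains on its private data, and a server combines the resulting parameters weighted by local dataset size. A minimal sketch, using toy parameter vectors rather than real model tensors (the function name `fedavg` and the example values are illustrative assumptions, not from any of the listed papers):

```python
# Toy sketch of the FedAvg aggregation step: clients never share raw data,
# only locally trained parameters, which the server averages with weights
# proportional to each client's dataset size.

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with unequal data: the client holding 3x more data
# pulls the global parameters proportionally toward its local model.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
global_model = fedavg(clients, sizes)  # -> [2.5, 3.5]
```

Techniques like low-rank adaptation fit naturally here: clients exchange only the small adapter matrices instead of full model weights, shrinking the vectors being averaged and hence the communication cost.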
Papers
Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-Trained Models
Dipam Goswami, Simone Magistri, Kai Wang, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer
Federated Source-free Domain Adaptation for Classification: Weighted Cluster Aggregation for Unlabeled Data
Junki Mori, Kosuke Kihara, Taiki Miyagawa, Akinori F. Ebihara, Isamu Teranishi, Hisashi Kashima
Refined Analysis of Federated Averaging's Bias and Federated Richardson-Romberg Extrapolation
Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
HumekaFL: Automated Detection of Neonatal Asphyxia Using Federated Learning
Pamely Zantou, Blessed Guda, Bereket Retta, Gladys Inabeza, Carlee Joe-Wong, Assane Gueye