Federated Learning Training
Federated learning (FL) trains machine learning models on decentralized data, preserving privacy by keeping raw data on client devices rather than sharing it. Current research focuses on improving FL efficiency and robustness through flexible model architectures that adapt to varying client resources, novel aggregation methods that handle heterogeneous model structures (e.g., using soft prompts as messengers between clients and the server), and optimized participant-selection strategies that account for energy constraints and network conditions. These advances are crucial for practical FL deployment across diverse devices and applications, particularly in resource-limited environments and in scenarios that require strong model privacy.
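To make the core training loop concrete, below is a minimal sketch of weighted server-side aggregation in the FedAvg style, where each client sends only its locally trained parameters and the server averages them by local dataset size. All names here (fed_avg, client_weights, client_sizes) are illustrative assumptions, not taken from any specific paper or library mentioned above.

```python
from typing import Dict, List
import numpy as np


def fed_avg(client_weights: List[Dict[str, np.ndarray]],
            client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Aggregate client model parameters, weighted by local dataset size.

    Clients train locally and upload only parameters, never raw data,
    which is how FL avoids direct data sharing.
    """
    total = sum(client_sizes)
    aggregated: Dict[str, np.ndarray] = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            (size / total) * weights[name]
            for weights, size in zip(client_weights, client_sizes)
        )
    return aggregated


# Example: two clients with a single-layer model (hypothetical values).
clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.5])},
]
sizes = [100, 300]  # local dataset sizes used as aggregation weights
global_model = fed_avg(clients, sizes)
print(global_model)  # {'w': array([2.5, 3.5]), 'b': array([1.25])}
```

In practice, participant selection would filter which clients contribute to each round (e.g., by battery level or connection quality), and heterogeneous-model aggregation would replace the simple per-parameter average shown here.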