Centralized Machine Learning
Centralized machine learning (ML) faces challenges in data privacy, security, and computational efficiency, prompting research into decentralized alternatives such as federated learning. Current research focuses on developing robust and efficient federated learning algorithms, including variants of gradient descent and novel aggregation techniques, to address data heterogeneity, communication overhead, and adversarial attacks. This shift toward decentralized approaches is significant because it enables collaborative model training across multiple entities while preserving data privacy and reducing the computational burden on central servers; applications span healthcare, network security, and personalized recommendations.
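The aggregation idea mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg): each client trains locally on its own data, and a server combines the resulting weights, weighted by client dataset size, without ever seeing the raw data. This is an illustrative toy (linear model, plain gradient descent, synthetic data), not any specific paper's method; all function names here are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on squared loss (toy example)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """Server step: average client updates, weighted by local dataset size."""
    total = sum(len(y) for _, y in client_data)
    new_w = np.zeros_like(global_w)
    for X, y in client_data:
        new_w += (len(y) / total) * local_update(global_w, X, y)
    return new_w

# Two clients with differently sized local datasets, all drawn from y = 2*x
rng = np.random.default_rng(0)
clients = []
for n in (20, 40):
    X = rng.normal(size=(n, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(50):  # communication rounds
    w = fedavg(w, clients)
print(round(float(w[0]), 2))  # converges toward the true slope 2.0
```

Only model weights cross the client/server boundary here; the raw `(X, y)` pairs stay on each client, which is the privacy property the paragraph above describes.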
Papers
Towards Communication-efficient Federated Learning via Sparse and Aligned Adaptive Optimization
Xiumei Deng, Jun Li, Kang Wei, Long Shi, Zehui Xiong, Ming Ding, Wen Chen, Shi Jin, H. Vincent Poor
Generative AI Enhances Team Performance and Reduces Need for Traditional Teams
Ning Li, Huaikang Zhou, Kris Mikel-Hong