Communication Efficiency
Communication efficiency in machine learning focuses on minimizing the data transmitted during model training and inference, which is particularly crucial in distributed and federated settings where bandwidth is limited. Current research emphasizes techniques such as model compression (e.g., frequency-space transformations, quantization, or sparse updates), efficient aggregation algorithms (e.g., FedAvg variants and ADMM), and novel optimizers (e.g., Lion) designed to reduce communication overhead. These advances are vital for scaling machine learning to larger datasets and larger numbers of devices, improving the practicality and energy efficiency of applications ranging from federated learning to multi-agent systems and IoT networks.
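To make the compression-plus-aggregation idea concrete, here is a minimal, illustrative sketch (not drawn from any of the papers listed below) of two of the techniques mentioned above: a client sparsifies its model update to the top-k entries, quantizes the surviving values to int8, and the server performs a FedAvg-style weighted average of the decompressed updates. All function names are hypothetical; only NumPy is assumed.

```python
import numpy as np

def sparsify_top_k(update, k):
    """Keep only the k largest-magnitude entries of a flat model update.

    The client transmits 2*k numbers (indices plus values) instead of
    update.size, a standard sparse-update compression scheme.
    """
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def quantize_int8(values):
    """Linear int8 quantization: 4x fewer bytes than float32 per value."""
    scale = max(float(np.abs(values).max()), 1e-12) / 127.0
    q = np.clip(np.round(values / scale), -127, 127).astype(np.int8)
    return q, scale

def server_aggregate(compressed_updates, dim, client_weights):
    """FedAvg-style weighted average of decompressed sparse updates."""
    total = np.zeros(dim, dtype=np.float32)
    for (idx, q, scale), w in zip(compressed_updates, client_weights):
        dense = np.zeros(dim, dtype=np.float32)
        dense[idx] = q.astype(np.float32) * scale  # dequantize
        total += w * dense
    return total / sum(client_weights)

# Toy round: 3 clients each send ~1% of a 100k-parameter update.
rng = np.random.default_rng(0)
dim, k = 100_000, 1_000
updates = [rng.standard_normal(dim).astype(np.float32) for _ in range(3)]
compressed = []
for u in updates:
    idx, vals = sparsify_top_k(u, k)
    q, scale = quantize_int8(vals)
    compressed.append((idx, q, scale))
avg_update = server_aggregate(compressed, dim, client_weights=[1.0, 1.0, 1.0])
# 8-byte indices + 1-byte values vs. 4 bytes per dense float32 parameter:
print(f"bytes sent per client: {k * 9} vs dense {dim * 4}")
```

In this toy round each client sends roughly 9 KB instead of 400 KB, at the cost of a lossy (biased) update; practical systems often pair such compression with error feedback, which accumulates the discarded residual locally and re-adds it in the next round.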
Papers
Communication Efficient and Provable Federated Unlearning
Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
T2MAC: Targeted and Trusted Multi-Agent Communication through Selective Engagement and Evidence-Driven Integration
Chuxiong Sun, Zehua Zang, Jiabao Li, Jiangmeng Li, Xiao Xu, Rui Wang, Changwen Zheng