Multiple Access
Multiple access techniques aim to share limited communication resources efficiently among many devices, a crucial challenge in distributed computing and the Internet of Things. Current research focuses heavily on optimizing federated learning over wireless networks, employing over-the-air computation and various multiple access schemes (e.g., TDMA, random access, NOMA) to aggregate model updates efficiently while mitigating interference and noise. This involves developing novel algorithms, such as deep reinforcement learning for resource allocation and robust gradient aggregation methods, to improve model training accuracy and convergence speed. These advances are vital for enabling large-scale distributed applications, particularly in edge intelligence and the Metaverse, by reducing communication bottlenecks and enhancing system efficiency.
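As a rough illustration of the over-the-air aggregation idea mentioned above, the sketch below simulates devices transmitting analog gradients that superpose on a shared channel, with the server rescaling the noisy sum. The device count, gradient dimension, and noise level are illustrative assumptions, not parameters taken from any of the listed papers.

```python
# Minimal sketch of over-the-air gradient aggregation: each device transmits
# its local gradient on the same channel, the signals superpose, and the
# server rescales the noisy sum to estimate the average gradient.
# All quantities below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 8, 32
noise_std = 0.05  # assumed receiver noise standard deviation

# Local gradients computed by each device (placeholder random values).
local_grads = [rng.normal(size=dim) for _ in range(num_devices)]

# The wireless channel adds the transmitted signals plus receiver noise;
# perfect power control (channel inversion) is assumed for simplicity.
superposed = np.sum(local_grads, axis=0) + rng.normal(scale=noise_std, size=dim)

# The server divides by the number of devices to recover the average gradient.
aggregated = superposed / num_devices

true_avg = np.mean(local_grads, axis=0)
print("aggregation error:", np.linalg.norm(aggregated - true_avg))
```

The key point of the sketch is that the channel itself performs the summation, so communication cost does not grow with the number of devices; the residual error comes from receiver noise and imperfect power control.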
Papers
Communication-Efficient Federated Learning over Wireless Channels via Gradient Sketching
Vineet Sunil Gattani, Junshan Zhang, Gautam Dasarathy
NetworkGym: Reinforcement Learning Environments for Multi-Access Traffic Management in Network Simulation
Momin Haider, Ming Yin, Menglei Zhang, Arpit Gupta, Jing Zhu, Yu-Xiang Wang