Device Scheduling
Device scheduling in distributed machine learning, particularly federated learning (FL), concerns selecting and assigning participating devices in each training round so as to minimize training time, energy consumption, and communication overhead while maximizing model accuracy. Current research focuses on efficient scheduling algorithms built on techniques such as Lyapunov optimization, multi-armed bandits, reinforcement learning, and matching pursuit, which address the challenges posed by heterogeneous devices, network constraints, and non-i.i.d. data; a bandit-based selection sketch follows below. These advances are crucial for scalable and efficient FL deployments in resource-constrained environments such as the Internet of Things (IoT), improving the practicality and performance of distributed AI applications. Research also explores complementary frameworks such as partial model aggregation to further improve efficiency and mitigate data heterogeneity.
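As a concrete illustration of one technique named above, the sketch below implements a minimal multi-armed-bandit (UCB1-style) device scheduler: each round it selects the devices whose estimated per-round utility plus an exploration bonus is highest, then updates its estimates from observed feedback. The class name, the scalar "utility" model, and the simulated rewards are illustrative assumptions, not any specific paper's method; real schedulers would also model energy budgets, channel state, and data heterogeneity.

```python
import math
import random


class UCBDeviceScheduler:
    """Minimal UCB1-style scheduler: balances observed per-device utility
    (exploitation) against uncertainty (exploration). Illustrative only."""

    def __init__(self, num_devices: int, per_round: int, c: float = 1.4):
        self.num_devices = num_devices
        self.per_round = per_round        # devices scheduled each round
        self.c = c                        # exploration weight
        self.counts = [0] * num_devices   # times each device was scheduled
        self.means = [0.0] * num_devices  # running mean utility per device

    def select(self, t: int) -> list[int]:
        # Schedule never-tried devices first, then rank the rest by UCB score.
        chosen = [i for i in range(self.num_devices) if self.counts[i] == 0]
        chosen = chosen[: self.per_round]
        remaining = self.per_round - len(chosen)
        if remaining > 0:
            scored = sorted(
                (i for i in range(self.num_devices) if i not in chosen),
                key=lambda i: self.means[i]
                + self.c * math.sqrt(math.log(t + 1) / self.counts[i]),
                reverse=True,
            )
            chosen += scored[:remaining]
        return chosen

    def update(self, device: int, utility: float) -> None:
        # Incremental mean update after observing a device's round utility.
        self.counts[device] += 1
        self.means[device] += (utility - self.means[device]) / self.counts[device]


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical per-device "true" utilities (e.g., inverse round latency).
    true_utility = [random.uniform(0.2, 1.0) for _ in range(20)]
    sched = UCBDeviceScheduler(num_devices=20, per_round=5)
    for t in range(200):
        for d in sched.select(t):
            # Noisy observation stands in for measured round feedback.
            sched.update(d, true_utility[d] + random.gauss(0, 0.1))
    best = sorted(range(20), key=lambda i: sched.means[i], reverse=True)[:5]
    print("devices the bandit ranks highest:", best)
```

In practice, the scalar utility would be replaced by whatever quantity the scheduler optimizes, such as measured round completion time, energy cost, or contribution to loss reduction, and methods such as Lyapunov optimization would additionally enforce long-term constraints (e.g., per-device energy budgets) across rounds.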