Federated Reinforcement Learning
Federated Reinforcement Learning (FRL) aims to train reinforcement learning agents collaboratively across multiple decentralized devices without directly sharing their private data, addressing privacy concerns while leveraging distributed computational resources. Current research focuses on overcoming data heterogeneity across devices, drawing on algorithms such as federated Q-learning and policy gradient methods (including natural policy gradient and actor-critic variants), and on improving convergence and communication efficiency through techniques such as momentum and ADMM. FRL's significance lies in its potential to enable large-scale, privacy-preserving applications in diverse fields, including recommendation systems, medical imaging, and resource allocation in networked systems such as smart grids and V2X networks.
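To make the core pattern concrete, below is a minimal sketch of federated tabular Q-learning with periodic averaging of local Q-tables (a FedAvg-style aggregation rule). It is an illustrative assumption rather than the method of any paper listed here: the random per-client MDPs, the hyperparameters, and the plain averaging step are all chosen only to keep the example short and runnable.

```python
"""Minimal sketch of federated tabular Q-learning with periodic averaging.

Illustrative only: the random per-client MDPs, hyperparameters, and the simple
averaging rule are assumptions, not the algorithm of any specific paper.
"""
import numpy as np

N_STATES, N_ACTIONS = 5, 3
N_CLIENTS = 4          # decentralized devices, each with its own environment
LOCAL_STEPS = 200      # local Q-learning updates between communication rounds
ROUNDS = 50            # number of server aggregation rounds
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2

rng = np.random.default_rng(0)

def make_mdp():
    """Heterogeneous environments: each client gets its own transitions/rewards."""
    P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
    R = rng.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))
    return P, R

clients = [make_mdp() for _ in range(N_CLIENTS)]
global_Q = np.zeros((N_STATES, N_ACTIONS))

def local_train(Q, P, R):
    """Epsilon-greedy Q-learning on local data; raw trajectories never leave the client."""
    Q = Q.copy()
    s = rng.integers(N_STATES)
    for _ in range(LOCAL_STEPS):
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[s].argmax())
        s_next = rng.choice(N_STATES, p=P[s, a])
        target = R[s, a] + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s_next
    return Q

for _ in range(ROUNDS):
    # Each client refines a copy of the global Q-table on its own environment.
    local_Qs = [local_train(global_Q, P, R) for P, R in clients]
    # Server aggregates by averaging the Q-tables; only model parameters are shared.
    global_Q = np.mean(local_Qs, axis=0)

print("Greedy policy after federation:", global_Q.argmax(axis=1))
```

The only quantity exchanged each round is the Q-table itself, which is what gives the privacy benefit described above; variants in the literature replace the plain average with momentum-based or ADMM-based aggregation to cope with heterogeneity and reduce communication.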
Papers
Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments
Han Wang, Sihong He, Zhili Zhang, Fei Miao, James Anderson
Federated Q-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost
Zhong Zheng, Haochen Zhang, Lingzhou Xue