Paper ID: 2209.13536

Transmit Power Control for Indoor Small Cells: A Method Based on Federated Reinforcement Learning

Peizheng Li, Hakan Erdol, Keith Briggs, Xiaoyang Wang, Robert Piechocki, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Angela Doufexi, Arjun Parekh

Setting the transmit power of 5G cells has been a long-standing topic of discussion, as optimised power settings can help reduce interference and improve the quality of service to users. Recently, machine learning (ML)-based control methods, especially reinforcement learning (RL)-based ones, have received much attention. However, there has been little discussion of the generalisation ability of the trained RL models. This paper points out that an RL agent trained in a specific indoor environment is room-dependent and cannot directly serve new, heterogeneous environments. Therefore, in the context of the Open Radio Access Network (O-RAN), this paper proposes a distributed cell power-control scheme based on federated reinforcement learning (FRL). Models trained in different indoor environments are aggregated into a global model during the training process, and the central server then broadcasts the updated model back to each client. The aggregated model is also used as the base model for adaptive training in new environments. Simulation results show that the FRL model achieves performance similar to that of a single RL agent, and both outperform the random power allocation method and the exhaustive search method. The generalisation test results show that using the FRL model as the base model improves the convergence speed of the model in a new environment.

Submitted: Aug 31, 2022
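
The abstract describes a federated aggregate-and-broadcast cycle over per-environment RL agents. The following is a minimal sketch of how such a FedAvg-style round could look; all names (fed_avg, train_local_policy, n_rounds, the use of NumPy vectors for policy parameters) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)            # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total       # weight by local experience size
    return (coeffs[:, None] * stacked).sum(axis=0)

def train_local_policy(global_weights, env_id, local_steps=1000):
    """Placeholder for the per-environment RL update on transmit-power actions.
    A small random perturbation stands in for the local learning step here."""
    rng = np.random.default_rng(env_id)
    return global_weights + 0.01 * rng.standard_normal(global_weights.shape)

# Federated training loop over heterogeneous indoor environments (clients).
n_params, n_clients, n_rounds = 128, 4, 10
global_weights = np.zeros(n_params)

for rnd in range(n_rounds):
    # Each client trains locally starting from the broadcast global model.
    local_weights = [train_local_policy(global_weights, env_id=c)
                     for c in range(n_clients)]
    sizes = [1000] * n_clients                    # assume equal local experience per round
    # Central server aggregates client models into the global model ...
    global_weights = fed_avg(local_weights, sizes)
    # ... and broadcasts it back to the clients (used at the next iteration).

# The aggregated global model can also seed adaptive training in a new environment.
```

In this sketch the aggregated weights play the role of the base model mentioned in the abstract: a new indoor environment would start local RL training from `global_weights` rather than from scratch, which is what the paper's generalisation test evaluates.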