Paper ID: 2207.08056
Federated Deep Reinforcement Learning for RIS-Assisted Indoor Multi-Robot Communication Systems
Ruyu Luo, Wanli Ni, Hui Tian, Julian Cheng
Indoor multi-robot communications face two key challenges: severe signal strength degradation caused by blockages (e.g., walls) and a dynamic environment caused by robot mobility. To address these issues, we employ a reconfigurable intelligent surface (RIS) to overcome signal blockage and assist the trajectory design of multiple robots. Meanwhile, non-orthogonal multiple access (NOMA) is adopted to cope with spectrum scarcity and enhance the connectivity of robots. Considering the limited battery capacity of robots, we aim to maximize energy efficiency by jointly optimizing the transmit power of the access point (AP), the phase shifts of the RIS, and the trajectories of the robots. A novel federated deep reinforcement learning (F-DRL) approach is developed to solve this challenging problem with a dynamic long-term objective. Since each robot plans its own path and downlink power locally, the AP only needs to determine the phase shifts of the RIS, which significantly reduces computation overhead owing to the lower training dimension. Simulation results reveal the following findings: I) the proposed F-DRL reduces convergence time by at least 86% compared to centralized DRL; II) the designed algorithm can adapt to an increasing number of robots; III) compared to traditional OMA-based benchmarks, NOMA-enhanced schemes achieve higher energy efficiency.
Submitted: Jul 17, 2022
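To make the federated division of labor described in the abstract concrete, the following minimal Python sketch illustrates one possible training-round structure: each robot updates a local policy for its own trajectory and downlink power, the AP aggregates the local models (FedAvg-style) and separately configures the RIS phase shifts. All names, dimensions, and update rules here are illustrative assumptions; this is not the authors' actual F-DRL algorithm or network architecture.

import numpy as np

# Hypothetical dimensions (not from the paper), chosen only for illustration.
NUM_ROBOTS = 3         # robots served by the AP via the RIS
NUM_RIS_ELEMENTS = 16  # reflecting elements at the RIS
STATE_DIM = 8          # local observation size per robot (assumed)
ACTION_DIM = 3         # e.g., 2-D waypoint offset + downlink power level (assumed)
ROUNDS = 5             # federated training rounds
LOCAL_STEPS = 10       # local update steps per round

rng = np.random.default_rng(0)

class LocalAgent:
    """Per-robot agent: a linear policy mapping local state to
    trajectory/power actions; stands in for the robot's DRL policy network."""
    def __init__(self):
        self.weights = rng.normal(scale=0.1, size=(STATE_DIM, ACTION_DIM))

    def act(self, state):
        return state @ self.weights

    def local_update(self, steps):
        # Placeholder for local policy updates on the robot's own
        # trajectory/power experience; here just random perturbations.
        for _ in range(steps):
            self.weights += 0.01 * rng.normal(size=self.weights.shape)
        return self.weights

def federated_average(weight_list):
    """Model aggregation at the AP: average the robots' local policy
    weights so that no raw local experience needs to be uploaded."""
    return np.mean(weight_list, axis=0)

def ap_phase_shift_update(num_elements):
    """AP-side decision: the AP only optimizes the RIS phase shifts,
    which keeps its training dimension small; here sampled in [0, 2*pi)."""
    return rng.uniform(0.0, 2 * np.pi, size=num_elements)

agents = [LocalAgent() for _ in range(NUM_ROBOTS)]
for rnd in range(ROUNDS):
    # 1) Each robot plans its own path and downlink power locally.
    local_weights = [agent.local_update(LOCAL_STEPS) for agent in agents]
    # 2) The AP aggregates the local models and configures the RIS.
    global_weights = federated_average(local_weights)
    phases = ap_phase_shift_update(NUM_RIS_ELEMENTS)
    # 3) The aggregated model is broadcast back to all robots.
    for agent in agents:
        agent.weights = global_weights.copy()
    print(f"round {rnd}: mean RIS phase = {np.mean(phases):.3f} rad")

In this sketch the robots never exchange raw states or trajectories, only model weights, which is what enables the reduced per-node training dimension and faster convergence claimed in the abstract; the actual reward design and phase-shift optimization would follow the paper's formulation.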