Paper ID: 2206.09549
Cooperative Edge Caching via Multi Agent Reinforcement Learning in Fog Radio Access Networks
Qi Chang, Yanxiang Jiang, Fu-Chun Zheng, Mehdi Bennis, Xiaohu You
In this paper, the cooperative edge caching problem in fog radio access networks (F-RANs) is investigated. To minimize the content transmission delay, we formulate a cooperative caching optimization problem whose goal is to find the globally optimal caching strategy. Considering the non-deterministic polynomial-time hard (NP-hard) nature of this problem, a multi-agent reinforcement learning (MARL)-based cooperative caching scheme is proposed. The proposed scheme applies a double deep Q-network (DDQN) at every fog access point (F-AP) and introduces a communication process into the multi-agent system. Each F-AP records the historical caching strategies of its associated F-APs as the observations of the communication procedure. By exchanging these observations, the F-APs can cooperate and converge to the globally optimal caching strategy. Simulation results show that the proposed MARL-based cooperative caching scheme substantially reduces the content transmission delay compared with the benchmark schemes.
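The abstract does not give implementation details, but the core mechanism it describes is a per-F-AP DDQN agent whose observation is augmented with the exchanged historical caching actions of neighboring F-APs. The following is a minimal sketch of such an agent, written under that assumption; the class names, network sizes, and hyperparameters below are hypothetical and not taken from the paper.

```python
# Hedged sketch: one DDQN agent per F-AP. The observation vector is assumed to
# concatenate local content-request statistics with the historical caching
# actions shared by associated F-APs; actions index candidate cache placements.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps an F-AP observation to Q-values over candidate caching actions."""
    def __init__(self, obs_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class DDQNAgent:
    def __init__(self, obs_dim: int, num_actions: int, gamma: float = 0.99, lr: float = 1e-3):
        self.online = QNetwork(obs_dim, num_actions)
        self.target = QNetwork(obs_dim, num_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.optimizer = torch.optim.Adam(self.online.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)
        self.gamma = gamma
        self.num_actions = num_actions

    def act(self, obs, epsilon: float = 0.1) -> int:
        # Epsilon-greedy caching decision for this F-AP.
        if random.random() < epsilon:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            q = self.online(torch.as_tensor(obs, dtype=torch.float32))
        return int(q.argmax())

    def store(self, obs, action, reward, next_obs, done):
        self.replay.append((obs, action, reward, next_obs, done))

    def update(self, batch_size: int = 64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        obs, act, rew, nxt, done = map(np.array, zip(*batch))
        obs = torch.as_tensor(obs, dtype=torch.float32)
        nxt = torch.as_tensor(nxt, dtype=torch.float32)
        act = torch.as_tensor(act, dtype=torch.int64).unsqueeze(1)
        rew = torch.as_tensor(rew, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)

        # Double DQN target: the online network selects the next action,
        # the target network evaluates it.
        next_a = self.online(nxt).argmax(dim=1, keepdim=True)
        next_q = self.target(nxt).gather(1, next_a).squeeze(1)
        target = rew + self.gamma * (1.0 - done) * next_q

        q = self.online(obs).gather(1, act).squeeze(1)
        loss = nn.functional.mse_loss(q, target.detach())
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())
```

In a multi-agent setting, each F-AP would hold one such agent; the cooperation described in the abstract would enter through the observation vector passed to `act` and `store`, which would include the recent caching actions reported by the associated F-APs. The reward signal (e.g., negative content transmission delay) is likewise an assumption, since the exact formulation is defined in the paper itself.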
Submitted: Jun 20, 2022