Paper ID: 2207.02074
Resource Allocation in Multicore Elastic Optical Networks: A Deep Reinforcement Learning Approach
Juan Pinto-Ríos, Felipe Calderón, Ariel Leiva, Gabriel Hermosilla, Alejandra Beghelli, Danilo Bórquez-Paredes, Astrid Lozada, Nicolás Jara, Ricardo Olivares, Gabriel Saavedra
A deep reinforcement learning approach is applied, for the first time, to solve the routing, modulation, spectrum and core allocation (RMSCA) problem in dynamic multicore fiber elastic optical networks (MCF-EONs). To do so, a new environment - compatible with OpenAI's Gym - was designed and implemented to emulate the operation of MCF-EONs. The new environment processes the agent's actions (selection of route, core and spectrum slot) by considering the network state and physical-layer aspects. The latter include the available modulation formats and their reach, as well as inter-core crosstalk (XT), an impairment specific to MCFs. If the resulting signal quality is acceptable, the environment allocates the resources selected by the agent. After processing the agent's action, the environment returns a numerical reward and information about the new network state. The blocking performance of four different agents was compared through simulation to three baseline heuristics used in MCF-EONs. Results obtained for the NSFNet and COST239 network topologies show that the best-performing agent achieves, on average, up to a four-fold reduction in blocking probability with respect to the best-performing baseline heuristic.
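To make the agent-environment interaction described in the abstract concrete, the following is a minimal sketch (not the authors' code) of what a Gym-compatible RMSCA environment could look like. It assumes the classic Gym step/reset API, an action given as (route index, core index, starting slot), a simplified binary slot-occupancy observation, and hypothetical placeholder checks for modulation reach and XT; all names and dimensions are illustrative assumptions.

# Hypothetical sketch of a Gym-style MCF-EON environment (illustrative only).
import gym
import numpy as np
from gym import spaces

class RMSCAEnv(gym.Env):
    """Toy RMSCA environment: action = (route, core, starting slot)."""

    def __init__(self, num_routes=5, num_cores=7, num_slots=320):
        super().__init__()
        self.num_routes, self.num_cores, self.num_slots = num_routes, num_cores, num_slots
        # Agent selects a candidate route, a core and a starting frequency slot.
        self.action_space = spaces.MultiDiscrete([num_routes, num_cores, num_slots])
        # Simplified observation: binary slot occupancy per core.
        self.observation_space = spaces.MultiBinary(num_cores * num_slots)
        self.occupancy = np.zeros((num_cores, num_slots), dtype=np.int8)

    def reset(self):
        self.occupancy[:] = 0
        return self.occupancy.flatten()

    def step(self, action):
        route, core, slot = action
        # Feasibility: free spectrum, modulation reach and acceptable XT (placeholders).
        feasible = (self.occupancy[core, slot] == 0
                    and self._reach_ok(route)
                    and self._xt_ok(route, core, slot))
        if feasible:
            self.occupancy[core, slot] = 1   # allocate the selected resources
            reward = 1.0                      # connection request accepted
        else:
            reward = -1.0                     # connection request blocked
        done = False                          # dynamic traffic: episode continues
        return self.occupancy.flatten(), reward, done, {}

    def _reach_ok(self, route):
        # Placeholder: compare route length against the chosen modulation format's reach.
        return True

    def _xt_ok(self, route, core, slot):
        # Placeholder: estimate inter-core crosstalk and compare against a threshold.
        return True

In this sketch the reward simply rewards accepted requests and penalizes blocked ones; the actual reward shaping, state encoding and XT model used by the paper may differ.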
Submitted: Jul 5, 2022