Paper ID: 2210.07500
ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning
Tiantian Chen, Siwen Yan, Jianxiong Guo, Weili Wu
Aiming to select a small subset of nodes with maximum influence on a network, the Influence Maximization (IM) problem has been extensively studied. Since computing the influence spread of a given seed set is #P-hard, state-of-the-art methods, including heuristic and approximation algorithms, face great difficulties in balancing theoretical guarantees, time efficiency, and generalization, which prevents them from adapting to large-scale networks and more complex applications. On the other hand, with the latest achievements of Deep Reinforcement Learning (DRL) in artificial intelligence and other fields, many works have focused on exploiting DRL to solve combinatorial optimization problems. Inspired by this, we propose ToupleGDD, a novel end-to-end DRL framework for the IM problem, which incorporates three coupled graph neural networks for network embedding and double deep Q-networks for parameter learning. Previous efforts to solve the IM problem with DRL trained their models on subgraphs of the target network and then tested them on the whole graph, which makes their performance unstable across different networks. In contrast, our model is trained on several small, randomly generated graphs with a small budget and tested on completely different networks under various large budgets; it obtains results very close to IMM, outperforms OPIM-C on several datasets, and shows strong generalization ability. Finally, we conduct extensive experiments on synthetic and real-world datasets, and the experimental results demonstrate the effectiveness and superiority of our model.
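To make the abstract's setup concrete, below is a minimal sketch of the double-DQN mechanics it refers to, applied to sequential seed selection: a Q-network scores candidate nodes given precomputed node embeddings and the current seed set, and the double-DQN target decouples action selection (online net) from action evaluation (target net). This is an illustrative assumption, not the paper's actual architecture; all names here (QNet, embed_dim, gamma, seed_mask) are hypothetical.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Scores each candidate node given node embeddings and the current seed set."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, node_emb: torch.Tensor, seed_mask: torch.Tensor) -> torch.Tensor:
        # node_emb: (N, D) embeddings; seed_mask: (N,) with 1.0 for chosen seeds.
        # Summarize the current partial solution by sum-pooling seed embeddings.
        state = (node_emb * seed_mask.unsqueeze(-1)).sum(0, keepdim=True)
        state = state.expand(node_emb.size(0), -1)
        # Q-value per node: how much influence adding this node is expected to gain.
        return self.score(torch.cat([node_emb, state], dim=-1)).squeeze(-1)

def double_dqn_target(online: QNet, target: QNet, node_emb, seed_mask, reward, gamma):
    """Double DQN: the online net picks the next node, the target net evaluates it."""
    with torch.no_grad():
        q_online = online(node_emb, seed_mask)
        q_online[seed_mask.bool()] = -float("inf")  # a seed cannot be reselected
        a_star = q_online.argmax()                  # action chosen by online net
        q_target = target(node_emb, seed_mask)      # ...but valued by target net
        return reward + gamma * q_target[a_star]

# Usage sketch: greedily grow a seed set of size k with the online network.
if __name__ == "__main__":
    N, D, k = 100, 32, 5
    node_emb = torch.randn(N, D)          # stand-in for GNN-produced embeddings
    online, tgt = QNet(D), QNet(D)
    seed_mask = torch.zeros(N)
    for _ in range(k):
        q = online(node_emb, seed_mask)
        q[seed_mask.bool()] = -float("inf")
        seed_mask[q.argmax()] = 1.0
    print("selected seeds:", seed_mask.nonzero().squeeze(-1).tolist())
```

Decoupling selection from evaluation in this way is the standard double-DQN remedy for the overestimation bias of vanilla Q-learning; the reward here would be the marginal influence spread of the newly added seed, estimated by simulation.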
Submitted: Oct 14, 2022