Paper ID: 2410.22089

InLINE: Inner-Layer Information Exchange for Multi-task Learning on Heterogeneous Graphs

Xinyue Feng, Jinquan Hang, Yuequn Zhang, Haotian Wang, Desheng Zhang, Guang Wang

Heterogeneous graphs are an important structure for modeling complex relational data in real-world scenarios and usually involve various node prediction tasks within a single graph. Training these tasks separately may neglect beneficial information sharing, hence a preferred way is to learn several tasks in the same model by Multi-Task Learning (MTL). However, MTL introduces the issue of negative transfer, where the training of different tasks interferes with each other as they may focus on different information from the data, resulting in suboptimal performance. To solve this issue, existing MTL methods use separate backbones for each task, then selectively exchange beneficial features through interactions among the output embeddings from each layer of different backbones, which we refer to as outer-layer exchange. However, the negative transfer in heterogeneous graphs arises not simply from the varying importance of an individual node feature across tasks, but also from the varying importance of the inter-relations between two nodes across tasks. These inter-relations are entangled in the output embedding, making it difficult for existing methods to discriminate beneficial information from the embedding. To address this challenge, we propose the Inner-Layer Information Exchange (InLINE) model, which facilitates fine-grained information exchanges within each graph layer rather than through output embeddings. Specifically, InLINE consists of (1) Structure Disentangled Experts for layer-wise structure disentanglement, and (2) Structure Disentangled Gates for assigning disentangled information to different tasks. Evaluations on two public datasets and a large industry dataset show that our model effectively alleviates the significant performance drop on specific tasks caused by negative transfer, improving Macro F1 by 6.3% on the DBLP dataset and AUC by 3.6% on the industry dataset compared to state-of-the-art methods.
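
The following is a minimal sketch, based only on the abstract, of how inner-layer exchange could work: per-relation "experts" keep messages from different relation types disentangled within a layer, and per-task gates recombine them before the next layer, rather than exchanging entangled output embeddings. All names (`StructureDisentangledLayer`, `relation_adjs`, the gating scheme) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of inner-layer information exchange on a heterogeneous
# graph. The actual InLINE architecture may differ; this only illustrates the
# expert/gate idea described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureDisentangledLayer(nn.Module):
    """One graph layer: per-relation experts produce disentangled messages,
    and per-task gates recombine them inside the layer (inner-layer exchange),
    instead of first mixing all relations into one entangled embedding."""

    def __init__(self, dim: int, num_relations: int, num_tasks: int):
        super().__init__()
        # One expert (linear transform) per relation type, so information
        # carried by different inter-relations stays separated.
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_relations)
        )
        # One gate per task: scores how useful each relation's message is
        # for that task, letting tasks share or ignore relations selectively.
        self.gates = nn.ModuleList(
            nn.Linear(dim, num_relations) for _ in range(num_tasks)
        )

    def forward(self, h, relation_adjs):
        # h: (N, dim) shared node states entering this layer.
        # relation_adjs: one row-normalized (N, N) adjacency per relation type.
        msgs = [adj @ expert(h) for adj, expert in zip(relation_adjs, self.experts)]
        stacked = torch.stack(msgs, dim=1)                 # (N, R, dim)
        outs = []
        for gate in self.gates:
            w = F.softmax(gate(h), dim=-1)                 # (N, R) relation weights
            outs.append((w.unsqueeze(-1) * stacked).sum(dim=1))  # (N, dim)
        return outs  # one task-specific representation per task


# Toy usage: 5 nodes, 2 relation types, 2 tasks.
if __name__ == "__main__":
    N, D = 5, 8
    adjs = [torch.rand(N, N) for _ in range(2)]
    adjs = [a / a.sum(dim=1, keepdim=True) for a in adjs]  # row-normalize
    layer = StructureDisentangledLayer(dim=D, num_relations=2, num_tasks=2)
    task_states = layer(torch.randn(N, D), adjs)
    print([t.shape for t in task_states])  # [(5, 8), (5, 8)]
```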

Submitted: Oct 29, 2024