Paper ID: 2407.21033
Multi-Grained Query-Guided Set Prediction Network for Grounded Multimodal Named Entity Recognition
Jielong Tang, Zhenxing Wang, Ziyang Gong, Jianxing Yu, Shuang Wang, Jian Yin
Grounded Multimodal Named Entity Recognition (GMNER) is an emerging information extraction (IE) task that aims to simultaneously extract entity spans, entity types, and entity-matched bounding box groundings in images from given sentence-image pairs. Recent unified methods, whether based on machine reading comprehension (MRC) frameworks or on sequence generation, face challenges in understanding the relationships among multimodal entities. MRC-based frameworks, which rely on human-designed queries, struggle to model intra-entity connections. Meanwhile, sequence generation-based methods rely excessively on inter-entity dependencies due to their pre-defined decoding order. To tackle these issues, we propose a novel unified framework, the Multi-grained Query-guided Set Prediction Network (MQSPN), to learn appropriate relationships at both the intra-entity and inter-entity levels. Specifically, MQSPN consists of a Multi-grained Query Set (MQS) and a Multimodal Set Prediction Network (MSP). MQS combines specific type-grained queries with learnable entity-grained queries to adaptively strengthen intra-entity connections by explicitly aligning visual regions with textual spans. Building on this intra-entity modeling, MSP reformulates GMNER as a set prediction task, enabling the parallel prediction of multimodal entities in a non-autoregressive manner; this eliminates redundant dependencies on preceding sequences and guides the model to establish appropriate inter-entity relationships from a global matching perspective. Additionally, to better align the two levels of relationships, we incorporate a Query-guided Fusion Net (QFNet) that serves as a glue network between MQS and MSP. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on widely used benchmarks. Notably, our method improves F1 by 2.83% on the challenging fine-grained GMNER benchmark.
Submitted: Jul 17, 2024