Multi-Modal Entity Alignment
Multi-modal entity alignment (MMEA) seeks to identify corresponding entities across different knowledge graphs that incorporate multiple data types, such as text, images, and relational links. Current research emphasizes robust fusion of these diverse modalities, often employing transformer-based architectures and contrastive learning methods to generate effective entity representations, while addressing challenges like modality-specific noise and missing data. Improved MMEA techniques will facilitate more comprehensive and accurate knowledge graph integration, impacting various applications including cross-lingual information retrieval, knowledge base completion, and enhanced multi-modal large language models.
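As a rough illustration of this recipe, the sketch below (not taken from any specific paper) fuses pre-extracted text, image, and structural features with a small attention-weighted encoder and trains it with an InfoNCE-style contrastive loss over seed-aligned entity pairs. The class and function names (`MultiModalEncoder`, `contrastive_alignment_loss`) and the feature dimensions are hypothetical placeholders, assuming modality features have already been extracted by upstream encoders.

```python
# Minimal sketch of a common MMEA setup: per-modality projection,
# attention-weighted fusion, and a contrastive loss that pulls
# seed-aligned entity pairs from two KGs together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalEncoder(nn.Module):
    """Hypothetical encoder: project each modality, fuse with learned weights."""
    def __init__(self, dims, hidden=256):
        super().__init__()
        # one linear projection per modality (e.g. text, image, relation/structure)
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.attn = nn.Linear(hidden, 1)  # scalar importance weight per modality

    def forward(self, feats):  # feats: list of (batch, dim_m) tensors
        h = torch.stack([p(x) for p, x in zip(self.proj, feats)], dim=1)  # (B, M, H)
        w = torch.softmax(self.attn(torch.tanh(h)), dim=1)                # (B, M, 1)
        return F.normalize((w * h).sum(dim=1), dim=-1)                    # fused (B, H)

def contrastive_alignment_loss(z1, z2, tau=0.05):
    """InfoNCE over a batch of seed-aligned pairs (row i of z1 matches row i of z2)."""
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    # symmetric cross-entropy: KG1 -> KG2 and KG2 -> KG1 directions
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Toy usage with random stand-ins for text / image / structure features.
if __name__ == "__main__":
    enc1 = MultiModalEncoder([768, 512, 128])
    enc2 = MultiModalEncoder([768, 512, 128])
    b = 32
    feats1 = [torch.randn(b, 768), torch.randn(b, 512), torch.randn(b, 128)]
    feats2 = [torch.randn(b, 768), torch.randn(b, 512), torch.randn(b, 128)]
    loss = contrastive_alignment_loss(enc1(feats1), enc2(feats2))
    loss.backward()
    print(f"contrastive alignment loss: {loss.item():.4f}")
```

In practice the per-modality features come from dedicated encoders (e.g. a language model for names and descriptions, a vision model for images, a graph encoder for relational structure), and the attention weights give one simple way to down-weight noisy or missing modalities during fusion.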