Semantic Fusion
Semantic fusion integrates information from multiple sources, such as text, images, and sensor data, to build a more comprehensive and nuanced understanding of a given subject. Current research focuses on developing effective fusion methods within various architectures, including transformers, recurrent neural networks, and graph neural networks, often leveraging pre-trained models such as the Segment Anything Model (SAM) to boost performance and reduce training requirements. Work in this area advances numerous applications, including object detection, scene understanding, recommendation systems, and content moderation, by enabling more robust and accurate analysis of complex multimodal data.
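To make the idea concrete, below is a minimal sketch of feature-level fusion between two modalities, assuming pre-extracted text and image embeddings (for example, token embeddings from a language model and patch embeddings from a vision encoder such as SAM's image encoder). The module name, dimensions, and cross-attention pattern are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn


class SimpleSemanticFusion(nn.Module):
    """Projects each modality into a shared space, then fuses them with
    cross-attention followed by a small feed-forward head."""

    def __init__(self, text_dim=768, image_dim=256, fused_dim=512, num_heads=8):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        # Text tokens attend to image tokens -- one common fusion pattern.
        self.cross_attn = nn.MultiheadAttention(fused_dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(fused_dim),
            nn.Linear(fused_dim, fused_dim),
            nn.GELU(),
        )

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, text_tokens, text_dim)
        # image_feats: (batch, image_tokens, image_dim)
        q = self.text_proj(text_feats)
        kv = self.image_proj(image_feats)
        fused, _ = self.cross_attn(q, kv, kv)   # text queries, image keys/values
        fused = self.ffn(fused + q)             # residual connection
        return fused.mean(dim=1)                # pooled multimodal representation


if __name__ == "__main__":
    text = torch.randn(2, 16, 768)    # e.g., token embeddings from a text encoder
    image = torch.randn(2, 64, 256)   # e.g., patch embeddings from an image encoder
    model = SimpleSemanticFusion()
    print(model(text, image).shape)   # torch.Size([2, 512])
```

Published methods vary widely in where fusion happens (early concatenation, cross-attention at every layer, graph-based message passing), so this sketch only illustrates the general pattern of projecting modalities into a shared space before combining them.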