Cross-Modal Interaction
Cross-modal interaction research focuses on effectively integrating information from different data modalities (e.g., text, images, audio) to improve the performance of AI systems. Current work emphasizes novel architectures, such as multimodal transformers and graph neural networks, together with training paradigms like cross-modal denoising and alternating unimodal adaptation, all aimed at better cross-modal alignment and feature fusion. Improved cross-modal understanding matters because it lets AI systems process and interpret richer, more nuanced information, advancing applications in areas such as image segmentation, robotics, and medical diagnosis.
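As a rough illustration of the alignment and fusion step these architectures perform, the sketch below fuses text and image features with a single cross-attention layer, where text tokens attend to image patches. The module name, feature dimensions, and PyTorch implementation are illustrative assumptions rather than the method of any particular paper surveyed here.

```python
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Minimal sketch: fuse text and image features via cross-attention."""

    def __init__(self, text_dim=512, image_dim=768, hidden_dim=256, num_heads=4):
        super().__init__()
        # Project each modality into a shared hidden space (alignment step).
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Text tokens (queries) attend to image patches (keys/values).
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, num_tokens,  text_dim)
        # image_feats: (batch, num_patches, image_dim)
        q = self.text_proj(text_feats)
        kv = self.image_proj(image_feats)
        attended, _ = self.cross_attn(query=q, key=kv, value=kv)
        # Residual connection keeps the original text signal; the attended
        # image context is added on top and normalized (fusion step).
        return self.norm(q + attended)


if __name__ == "__main__":
    fusion = CrossModalFusion()
    text = torch.randn(2, 16, 512)   # e.g., 16 text tokens per sample
    image = torch.randn(2, 49, 768)  # e.g., a 7x7 grid of image patches
    fused = fusion(text, image)
    print(fused.shape)  # torch.Size([2, 16, 256])
```

In practice, multimodal transformers typically stack several such layers, often with attention in both directions (image-to-text as well as text-to-image), but the core interaction is the same: one modality queries the other and the result is fused into a shared representation.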