Multimodal Recommendation
Multimodal recommendation systems aim to improve recommendation accuracy by integrating diverse data modalities, such as text, images, and audio, alongside traditional user-item interaction data. Current research focuses on effective fusion techniques across a range of model architectures, including those built on large language models and large multimodal models, to address challenges such as cold-start items and modality imbalance. These advances hold significant potential for personalized recommendation in applications from e-commerce and entertainment to news and information retrieval, by providing richer and more contextually relevant suggestions.
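To make the fusion idea concrete, below is a minimal sketch of a late-fusion recommender in PyTorch. Everything in it is illustrative rather than drawn from any specific paper: the class name `LateFusionRecommender`, the learned softmax gate over modalities (one simple way to mitigate modality imbalance), and the feature dimensions are all assumptions.

```python
import torch
import torch.nn as nn


class LateFusionRecommender(nn.Module):
    """Hypothetical late-fusion model: combines collaborative ID
    embeddings with projected text and image features before scoring."""

    def __init__(self, n_users, n_items, d_text, d_image, d_model=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_model)
        self.item_emb = nn.Embedding(n_items, d_model)
        # Project each content modality into the shared latent space.
        self.text_proj = nn.Linear(d_text, d_model)
        self.image_proj = nn.Linear(d_image, d_model)
        # Learned gate balancing ID, text, and image signals; a simple
        # (assumed, not paper-specific) way to handle modality imbalance.
        self.gate = nn.Parameter(torch.ones(3) / 3)

    def item_repr(self, item_ids, text_feat, image_feat):
        # Weighted sum of the three sources, with weights on a simplex.
        w = torch.softmax(self.gate, dim=0)
        return (w[0] * self.item_emb(item_ids)
                + w[1] * self.text_proj(text_feat)
                + w[2] * self.image_proj(image_feat))

    def forward(self, user_ids, item_ids, text_feat, image_feat):
        # Dot-product score between user and fused item representation.
        fused = self.item_repr(item_ids, text_feat, image_feat)
        return (self.user_emb(user_ids) * fused).sum(-1)


# Toy usage: 100 users, 500 items, 768-d text and 512-d image features.
model = LateFusionRecommender(100, 500, d_text=768, d_image=512)
users = torch.randint(0, 100, (8,))
items = torch.randint(0, 500, (8,))
scores = model(users, items, torch.randn(8, 768), torch.randn(8, 512))
print(scores.shape)  # torch.Size([8])
```

For cold-start items with no interaction history, the same architecture can fall back on the text and image projections alone, since content features exist before any user has interacted with the item.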