Novel Multimodal
Novel multimodal research focuses on integrating diverse data types (e.g., text, images, audio, sensor data) to improve model performance and understanding in various applications. Current efforts concentrate on developing and refining multimodal large language models (LLMs) and incorporating techniques like attention mechanisms and parameter-efficient fine-tuning to enhance both accuracy and generalizability across tasks. This work is significant for advancing artificial intelligence capabilities in fields ranging from healthcare diagnostics and industrial maintenance to educational technology and financial crime detection, ultimately leading to more robust and insightful systems.
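The summary above names cross-modal attention and parameter-efficient fine-tuning as recurring ingredients. The sketch below is a minimal, illustrative example of both ideas, not any specific paper's architecture: text tokens attend over image patch embeddings via standard cross-attention, and a LoRA-style low-rank adapter is placed on the output projection so only a small number of parameters are trained. All module names, dimensions, and the fusion layout are assumptions made for the example.

```python
# Minimal sketch of cross-modal attention fusion with a LoRA-style adapter.
# Illustrative only; dimensions and layout are assumptions, not a paper's design.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank trainable update (W + scale * B @ A)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class CrossModalFusion(nn.Module):
    """Text tokens attend over image patch embeddings via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj = LoRALinear(nn.Linear(dim, dim))   # PEFT on the output projection

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text: (batch, n_text_tokens, dim); image: (batch, n_patches, dim)
        fused, _ = self.attn(query=text, key=image, value=image)
        return self.norm(text + self.proj(fused))     # residual connection + norm


if __name__ == "__main__":
    model = CrossModalFusion()
    text = torch.randn(2, 16, 256)    # dummy text token embeddings
    image = torch.randn(2, 49, 256)   # dummy 7x7 image patch embeddings
    print(model(text, image).shape)   # torch.Size([2, 16, 256])
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")
```

Only the two low-rank matrices train, which is the point of the parameter-efficient approach: the frozen backbone supplies general representations while the adapter specializes the fusion layer to a new multimodal task.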
Papers
Twelve papers on this topic, dated June 5, 2023 through October 21, 2024.