Multimodal Sensors
Multimodal sensor research focuses on integrating data from diverse sensor types (e.g., cameras, IMUs, LiDAR) to improve accuracy and robustness in applications such as activity recognition, object localization, and autonomous driving. Current work emphasizes efficient and adaptable model architectures, including masked autoencoders, conditional neural networks, and transformer-based approaches, that fuse multimodal data effectively and cope with the heterogeneity of sensor streams. The field matters because it enables more accurate and reliable systems in domains such as human-computer interaction, healthcare monitoring, and autonomous systems.
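To make the fusion step concrete, the sketch below shows one common transformer-based pattern: each sensor stream is projected into a shared embedding space, tagged with a learned modality embedding, and fused by a small transformer encoder before a task head (e.g., activity classification). This is a minimal, hypothetical example assuming PyTorch and arbitrary feature dimensions; it is not taken from any specific paper listed on this page.

```python
# Minimal sketch of transformer-based multimodal sensor fusion (hypothetical
# dimensions and class count). Camera and IMU features become one token each.
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    def __init__(self, camera_dim=512, imu_dim=64, embed_dim=128, num_classes=10):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.camera_proj = nn.Linear(camera_dim, embed_dim)
        self.imu_proj = nn.Linear(imu_dim, embed_dim)
        # Learned modality embeddings let the encoder tell the tokens apart.
        self.modality_embed = nn.Parameter(torch.zeros(2, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, camera_feat, imu_feat):
        # camera_feat: (batch, camera_dim); imu_feat: (batch, imu_dim)
        tokens = torch.stack(
            [self.camera_proj(camera_feat), self.imu_proj(imu_feat)], dim=1
        )                                        # (batch, 2, embed_dim)
        tokens = tokens + self.modality_embed    # broadcast over the batch
        fused = self.fusion(tokens).mean(dim=1)  # pool over modality tokens
        return self.head(fused)


if __name__ == "__main__":
    model = MultimodalFusion()
    logits = model(torch.randn(8, 512), torch.randn(8, 64))
    print(logits.shape)  # torch.Size([8, 10])
```

Token-level fusion like this makes it straightforward to drop or add a modality (e.g., when a sensor fails), which is one reason transformer-style fusion is favored over simple feature concatenation in this literature.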