Multi-Modal Sensing
Multi-modal sensing integrates data from diverse sensor types (e.g., cameras, LiDAR, IMUs, radar) to achieve more robust and comprehensive perception than any single modality provides alone. Current research emphasizes efficient fusion techniques, including graph neural networks and adaptive training strategies that tolerate incomplete or noisy data, with a focus on improving accuracy and energy efficiency in applications such as autonomous navigation, human activity recognition, and healthcare monitoring. The field matters for robotics, autonomous systems, and healthcare technologies because it enables reliable, context-aware perception in complex, dynamic environments.
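To make the fusion and missing-modality ideas concrete, here is a minimal illustrative sketch in PyTorch, not drawn from any specific paper: a late-fusion model in which per-modality encoders produce embeddings that are combined by a presence-masked mean, so the network degrades gracefully when a sensor stream drops out. The class name LateFusionNet, the layer sizes, and the input dimensions are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Late fusion over several modalities; absent modalities are masked out."""

    def __init__(self, input_dims, embed_dim=64, num_classes=5):
        super().__init__()
        # One small encoder per modality (e.g., camera, LiDAR, IMU feature vectors).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU()) for d in input_dims]
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, inputs, present):
        # inputs:  list of (batch, dim_i) tensors, one per modality
        # present: (batch, num_modalities) 0/1 mask of available modalities
        embeds = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=1
        )  # (batch, M, embed_dim)
        mask = present.unsqueeze(-1)  # (batch, M, 1)
        # Masked mean: average only over the modalities that are present,
        # so the fused embedding keeps a consistent scale.
        fused = (embeds * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.classifier(fused)

# Usage: two modalities, e.g., 128-d image features and 6-d IMU features.
model = LateFusionNet([128, 6])
x_img = torch.randn(4, 128)
x_imu = torch.randn(4, 6)
present = torch.tensor([[1., 1.], [1., 0.], [0., 1.], [1., 1.]])  # some streams missing
logits = model([x_img, x_imu], present)
print(logits.shape)  # torch.Size([4, 5])
```

A common adaptive-training strategy of the kind mentioned above is modality dropout: randomly zeroing entries of the presence mask during training, which forces the model to remain accurate when individual sensors fail or are absent at inference time.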