Multi-Modal Sensing

Multi-modal sensing integrates data from diverse sensor types (e.g., cameras, LiDAR, IMUs, radar) to achieve more robust and comprehensive perception than single-modality approaches. Current research emphasizes efficient fusion techniques, including graph neural networks and adaptive training strategies that handle incomplete or noisy data, with a focus on improving accuracy and energy efficiency in applications such as autonomous navigation, human activity recognition, and healthcare monitoring. The field is significant for robotics, autonomous systems, and healthcare technologies because it enables more reliable, context-aware systems that can operate in complex, dynamic environments.
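To make the fusion idea concrete, here is a minimal late-fusion sketch in plain Python. It is purely illustrative (the function name `fuse_scores` and the confidence-weighting scheme are assumptions, not from any cited paper): each modality's class scores are weighted by its peak score as a crude confidence proxy, and a modality that drops out (e.g., a failed sensor) is simply skipped, illustrating tolerance to incomplete data.

```python
# Hypothetical late-fusion sketch: combine per-modality class scores by
# confidence-weighted averaging, skipping modalities that are unavailable.
# Names and the weighting heuristic are illustrative assumptions.

def fuse_scores(modality_scores):
    """Fuse per-modality class-score lists into one fused score list.

    modality_scores: dict mapping modality name -> list of class scores,
    with None marking a missing or failed sensor for this frame.
    Each available modality is weighted by its peak score, so more
    confident modalities contribute more to the fused result.
    """
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if not available:
        raise ValueError("no modality produced scores")
    n_classes = len(next(iter(available.values())))
    fused = [0.0] * n_classes
    total_weight = 0.0
    for scores in available.values():
        weight = max(scores)  # crude confidence proxy
        total_weight += weight
        for i, s in enumerate(scores):
            fused[i] += weight * s
    return [f / total_weight for f in fused]

# Example frame: camera is confident, LiDAR is noisy, IMU dropped out.
fused = fuse_scores({
    "camera": [0.9, 0.1],
    "lidar": [0.4, 0.6],
    "imu": None,  # incomplete data: modality unavailable this frame
})
print(fused)
```

Real systems replace this heuristic with learned fusion (e.g., attention over modality embeddings or graph neural networks, as mentioned above), but the structure is the same: per-modality features or scores are combined under weights that reflect each sensor's reliability.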

Papers