Multi-Sensor Fusion
Multi-sensor fusion integrates data from diverse sources (e.g., cameras, LiDAR, radar, IMUs) to improve the accuracy, robustness, and reliability of perception and state estimation in applications such as autonomous vehicles and robotics. Current research emphasizes efficient fusion architectures, including deep neural networks, factor graph optimization, and variational inference, and often focuses on challenges such as data heterogeneity, sensor misalignment, and robustness to missing or corrupted data. These advances are crucial for reliable operation of autonomous systems in complex and unpredictable environments, with impact extending beyond autonomous driving and robotics to environmental monitoring and healthcare.
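To make the core idea concrete, below is a minimal sketch of measurement-level fusion under the simplifying assumption that each sensor provides an independent Gaussian estimate of the same quantity; the estimates are combined by inverse-variance weighting, the building block behind Kalman-style updates. The sensor names and noise values are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal sketch: fuse independent Gaussian sensor estimates of one scalar state
# by inverse-variance weighting (more confident sensors get larger weights).
import numpy as np

def fuse_gaussian(means, variances):
    """Fuse independent Gaussian estimates of the same quantity."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                 # inverse-variance weights
    fused_var = 1.0 / weights.sum()           # fused variance is never larger than any input
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical example: range to an obstacle reported by three sensors.
lidar_pos, lidar_var = 10.02, 0.01    # LiDAR: precise range
radar_pos, radar_var = 9.80, 0.25     # radar: noisier but weather-robust
camera_pos, camera_var = 10.30, 0.50  # camera: coarse depth estimate

pos, var = fuse_gaussian([lidar_pos, radar_pos, camera_pos],
                         [lidar_var, radar_var, camera_var])
print(f"fused position: {pos:.3f} m, variance: {var:.4f} m^2")
```

The same weighting principle generalizes to the factor graph and filtering formulations mentioned above, where each sensor contributes a residual term weighted by its noise model.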
Papers
Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging
Lingting Zhu, Yizheng Chen, Lianli Liu, Lei Xing, Lequan Yu
EEPNet: Efficient Edge Pixel-based Matching Network for Cross-Modal Dynamic Registration between LiDAR and Camera
Yuanchao Yue, Hui Yuan, Suai Li, Qi Jiang