Sensor Fusion
Sensor fusion integrates data from multiple sensors to enhance the accuracy, robustness, and reliability of perception systems. Current research emphasizes efficient and robust fusion algorithms that handle diverse sensor modalities (e.g., camera, LiDAR, radar, inertial sensors) and address challenges such as sensor misalignment and label uncertainty; these algorithms often employ deep learning architectures such as convolutional neural networks (CNNs) and transformers, alongside Kalman filters and other probabilistic methods. The field is crucial for advancing autonomous vehicles, robotics, and other applications that require accurate, reliable, real-time environmental understanding.
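As a concrete illustration of the probabilistic side of this toolbox, the short Python sketch below fuses position readings from two noisy sensors with a one-dimensional Kalman filter. It is a minimal example written for this summary, not taken from any of the papers listed here; the sensor noise variances, process noise, and function names are illustrative assumptions.

import numpy as np

def kalman_fuse(z_a, z_b, var_a, var_b, q=1e-3):
    """Fuse two synchronized 1-D measurement streams with a constant-position Kalman filter.

    z_a, z_b : measurement sequences from sensor A and sensor B (assumed time-aligned)
    var_a, var_b : measurement noise variances of each sensor (assumed known)
    q : process noise variance (illustrative value)
    """
    x, p = z_a[0], var_a          # initialize state and variance from sensor A's first reading
    estimates = []
    for za, zb in zip(z_a, z_b):
        # Predict: constant-position model, so only the uncertainty grows.
        p = p + q
        # Update with sensor A.
        k = p / (p + var_a)
        x = x + k * (za - x)
        p = (1.0 - k) * p
        # Update with sensor B; sequential updates are equivalent to a joint update
        # when the two sensors' noises are independent.
        k = p / (p + var_b)
        x = x + k * (zb - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Illustrative usage: a stationary target observed by two sensors with different noise levels.
rng = np.random.default_rng(0)
true_pos = 5.0
sensor_a = true_pos + rng.normal(0.0, 1.0, size=50)   # noisier sensor, variance 1.0
sensor_b = true_pos + rng.normal(0.0, 0.3, size=50)   # more accurate sensor, variance 0.09
fused = kalman_fuse(sensor_a, sensor_b, var_a=1.0, var_b=0.09)
print(f"last fused estimate: {fused[-1]:.3f} (true value {true_pos})")

Because the filter weights each measurement by the inverse of its variance, the fused estimate leans toward the more accurate sensor while still damping its noise; the same recursive predict/update structure underlies many of the multimodal fusion pipelines surveyed above.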
Papers
Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes
Tim Broedermann, Christos Sakaridis, Yuqian Fu, Luc Van Gool
SMART-TRACK: A Novel Kalman Filter-Guided Sensor Fusion For Robust UAV Object Tracking in Dynamic Environments
Khaled Gabr, Mohamed Abdelkader, Imen Jarraya, Abdullah AlMusalami, Anis Koubaa