Transformer-Based Sensor Fusion
Transformer-based sensor fusion aims to improve the accuracy and robustness of perception systems, particularly in autonomous driving, by effectively combining data from multiple sensors (e.g., cameras, LiDAR). Current research relies heavily on transformer architectures, often paired with convolutional layers for feature extraction, and employs multi-task learning to address perception and control tasks simultaneously. By handling asynchronous sensor data and mitigating the limitations of single-modality approaches, this line of work shows significant promise for enhancing scene understanding, improving waypoint prediction, and increasing the safety and reliability of autonomous systems.
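The core mechanism behind most of these architectures is cross-attention: tokens from one modality (e.g., image patches) query tokens from another (e.g., LiDAR point features), so each fused feature is an attention-weighted mixture of the other modality. The following is a minimal NumPy sketch of single-head cross-attention fusion; the random projection matrices stand in for learned weights, and all shapes and names are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(cam_tokens, lidar_tokens, d_k=32, seed=0):
    """Camera tokens (queries) attend to LiDAR tokens (keys/values).

    Returns one fused d_k-dimensional feature per camera token.
    Random projections below stand in for learned weight matrices.
    """
    rng = np.random.default_rng(seed)
    d_cam = cam_tokens.shape[-1]
    d_lidar = lidar_tokens.shape[-1]
    W_q = rng.standard_normal((d_cam, d_k)) / np.sqrt(d_cam)
    W_k = rng.standard_normal((d_lidar, d_k)) / np.sqrt(d_lidar)
    W_v = rng.standard_normal((d_lidar, d_k)) / np.sqrt(d_lidar)

    Q = cam_tokens @ W_q          # (n_cam, d_k)
    K = lidar_tokens @ W_k        # (n_lidar, d_k)
    V = lidar_tokens @ W_v        # (n_lidar, d_k)

    # Scaled dot-product attention: each camera token mixes LiDAR features.
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_cam, n_lidar)
    return attn @ V                          # (n_cam, d_k)

# Hypothetical inputs: 16 image-patch features, 32 point-cloud features.
cam = np.random.default_rng(1).standard_normal((16, 64))
lidar = np.random.default_rng(2).standard_normal((32, 128))
fused = cross_attention_fusion(cam, lidar)
print(fused.shape)  # (16, 32)
```

In a full model these fused tokens would feed downstream heads (detection, waypoint prediction), and in practice the attention would be multi-headed with learned, trained projections rather than random ones.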
Papers
October 19, 2023
August 4, 2023
February 22, 2023
July 28, 2022
June 26, 2022
May 31, 2022