Camera Fusion
Camera fusion integrates data from multiple cameras, or from cameras combined with other sensors such as radar or LiDAR, to improve perception and overcome the limitations of any single sensor. Current research emphasizes robust deep learning models, often built on transformer architectures or multi-task learning strategies, that fuse data from diverse sources — for example, wide-angle with telephoto cameras, or cameras with radar — while addressing challenges such as data synthesis, sensor misalignment, and occlusion. This work is crucial for advancing autonomous driving, mobile photography, and other applications that require high-quality, reliable visual information, particularly in challenging environments.
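As a rough illustration of the transformer-style fusion described above, the sketch below fuses hypothetical radar feature tokens into camera feature tokens with single-head cross-attention. All names, shapes, and the residual-fusion choice are illustrative assumptions (NumPy is used in place of a deep-learning framework), not the method of any particular paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(cam_feats, radar_feats):
    """Single-head cross-attention: camera tokens query radar tokens.

    cam_feats:   (N_cam, D)   queries (e.g. image patch embeddings)
    radar_feats: (N_radar, D) keys/values (e.g. radar point embeddings)
    Returns fused camera features of shape (N_cam, D).
    """
    d = cam_feats.shape[1]
    # Scaled dot-product attention weights over radar tokens.
    scores = cam_feats @ radar_feats.T / np.sqrt(d)  # (N_cam, N_radar)
    attn = softmax(scores, axis=-1)
    # Residual fusion: each camera token absorbs a weighted
    # mix of radar tokens while keeping its own content.
    return cam_feats + attn @ radar_feats

rng = np.random.default_rng(0)
cam = rng.standard_normal((8, 16))    # 8 camera tokens, 16-dim
radar = rng.standard_normal((4, 16))  # 4 radar tokens, 16-dim
fused = cross_attention_fuse(cam, radar)
print(fused.shape)  # (8, 16)
```

In a real model the projections to queries, keys, and values would be learned, and misaligned sensors would first be brought into a common coordinate frame; this sketch only shows the fusion step itself.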
Papers
(Paper titles and links did not survive extraction; the listed publication dates span February 24, 2022 to October 24, 2024.)