Visual SLAM
Visual SLAM (Simultaneous Localization and Mapping) aims to build a map of an environment while simultaneously tracking a camera's position within it, using only visual input. Current research emphasizes improving robustness and efficiency, particularly in challenging conditions such as low light, dynamic scenes, and texture-poor environments, often employing hybrid direct-indirect methods, deep learning for feature extraction and matching, and novel map representations such as 3D Gaussian splatting. These advances matter for robotics, augmented reality, and autonomous navigation, enabling more reliable and adaptable systems in diverse and complex settings.
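To make the "localization" half concrete, below is a minimal sketch of the indirect (feature-based) tracking step that many visual SLAM front-ends perform: extract and match features between two frames, then recover the relative camera motion. It uses standard OpenCV calls; the intrinsic matrix values are hypothetical placeholders, and a full system would add keyframing, local mapping, and loop closure on top of this.

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics (placeholder values, not from any specific dataset).
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

def relative_pose(img_prev, img_curr, K):
    """Estimate relative camera rotation R and unit-scale translation t
    between two grayscale frames using ORB features."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Brute-force Hamming matching with cross-check to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC essential-matrix estimation, then decomposition into the
    # camera's rotation and (scale-ambiguous) translation between frames.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

Direct methods skip the explicit feature matching and instead minimize photometric error over pixel intensities, which is where the hybrid direct-indirect approaches mentioned above come in.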