Visual SLAM
Visual SLAM (Simultaneous Localization and Mapping) builds a map of an environment while simultaneously tracking the camera's position within it, using only visual input. Current research emphasizes robustness and efficiency in challenging conditions such as low light, dynamic scenes, and texture-poor environments, often through hybrid direct-indirect methods, deep learning for feature extraction and matching, and novel map representations such as 3D Gaussian splatting. These advances are crucial for robotics, augmented reality, and autonomous navigation, enabling more reliable and adaptable systems in diverse and complex settings.
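The core loop described above, alternating between localizing against an existing map and extending that map with new observations, can be sketched in a toy 2D setting. This is a minimal illustration, not any real SLAM system's API: landmarks are 2D points with perfect data association (integer ids), the heading is assumed known (e.g., from odometry) so pose estimation stays linear, and all function names (`slam_step`, `estimate_pose`, `transform_to_world`) are hypothetical.

```python
import math

# Toy tracking-and-mapping loop (illustrative only, not a real SLAM API).
# Observations map a landmark id to its 2D position in the sensor frame;
# the map stores landmark positions in the world frame.

def transform_to_world(pose, point):
    """Apply pose (x, y, theta) to a point given in the sensor frame."""
    x, y, th = pose
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def estimate_pose(obs, landmark_map, theta):
    """Localize: recover translation (heading assumed known) by averaging
    the world-frame offsets implied by each already-mapped landmark."""
    dx = dy = 0.0
    n = 0
    for lid, (ox, oy) in obs.items():
        if lid not in landmark_map:
            continue
        mx, my = landmark_map[lid]
        # Rotate the observation into the world frame, then compare
        # against the mapped landmark to infer the camera translation.
        wx = ox * math.cos(theta) - oy * math.sin(theta)
        wy = ox * math.sin(theta) + oy * math.cos(theta)
        dx += mx - wx
        dy += my - wy
        n += 1
    return (dx / n, dy / n, theta)

def slam_step(pose, obs, landmark_map):
    # 1. Tracking: localize against landmarks already in the map.
    if any(lid in landmark_map for lid in obs):
        pose = estimate_pose(obs, landmark_map, pose[2])
    # 2. Mapping: add newly seen landmarks in world coordinates.
    for lid, pt in obs.items():
        if lid not in landmark_map:
            landmark_map[lid] = transform_to_world(pose, pt)
    return pose

landmark_map = {}
# First frame at the origin: no map yet, so both landmarks are inserted.
pose = slam_step((0.0, 0.0, 0.0), {0: (2.0, 0.0), 1: (0.0, 3.0)}, landmark_map)
# Second frame after moving +1 in x: both landmarks are re-observed,
# and tracking recovers the new position from the map alone.
pose = slam_step(pose, {0: (1.0, 0.0), 1: (-1.0, 3.0)}, landmark_map)
```

Real systems replace each piece with something far heavier (feature matching or direct photometric alignment for data association, bundle adjustment or pose-graph optimization for the estimate), but the tracking/mapping alternation is the same.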