Visual SLAM
Visual SLAM (Simultaneous Localization and Mapping) aims to build a map of an environment while simultaneously tracking the camera's pose within it, using only visual input. Current research emphasizes improving robustness and efficiency in challenging conditions such as low light, dynamic scenes, and texture-poor environments, often employing hybrid direct-indirect methods, deep learning for feature extraction and matching, and novel map representations such as 3D Gaussian splatting. These advances are crucial for robotics, augmented reality, and autonomous navigation, enabling more reliable and adaptable systems in diverse and complex settings.
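To make the localization half of the problem concrete, here is a minimal toy sketch (not any specific SLAM system's method): given a small set of known 2D landmarks and noise-free range-bearing observations, a Gauss-Newton solver recovers the camera/robot pose (x, y, heading). All landmark positions, pose values, and function names here are illustrative assumptions; a real visual SLAM pipeline instead works with image features, estimates landmarks jointly with poses, and refines both via bundle adjustment.

```python
import numpy as np

def observe(pose, landmarks):
    # Toy sensor model: range and bearing to each known 2D landmark
    # from pose = (x, y, theta). Returns an (N, 2) array.
    x, y, th = pose
    d = landmarks - np.array([x, y])
    rng = np.linalg.norm(d, axis=1)
    brg = np.arctan2(d[:, 1], d[:, 0]) - th
    return np.stack([rng, brg], axis=1)

def localize(obs, landmarks, init=(0.0, 0.0, 0.0), iters=20):
    # Gauss-Newton on range-bearing residuals with a numeric Jacobian.
    # This is only the localization step; mapping would add landmark
    # states to the optimization as well.
    pose = np.array(init, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        pred = observe(pose, landmarks).ravel()
        res = obs.ravel() - pred
        # Wrap bearing residuals (odd indices) into (-pi, pi].
        res[1::2] = (res[1::2] + np.pi) % (2 * np.pi) - np.pi
        J = np.zeros((res.size, 3))
        for k in range(3):
            p = pose.copy()
            p[k] += eps
            J[:, k] = (observe(p, landmarks).ravel() - pred) / eps
        pose += np.linalg.lstsq(J, res, rcond=None)[0]
    return pose
```

With noise-free observations the solver recovers the pose essentially exactly; real systems must additionally handle outliers (e.g. via robust cost functions) and unknown data association.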