Visual SLAM
Visual SLAM (Simultaneous Localization and Mapping) aims to build a map of an environment while simultaneously tracking a camera's position within it, using only visual input. Current research emphasizes improving robustness and efficiency, particularly in challenging conditions such as low light, dynamic scenes, and texture-poor environments, often employing hybrid direct-indirect methods, deep learning for feature extraction and matching, and novel map representations such as 3D Gaussian splatting. These advances are crucial for robotics, augmented reality, and autonomous navigation, enabling more reliable and adaptable systems in diverse and complex settings.
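To make the tracking half of such a pipeline concrete, the sketch below shows a minimal indirect (feature-based) front-end step: matching ORB features between two frames and recovering the relative camera pose from the essential matrix. It is only an illustrative sketch, assuming OpenCV and NumPy are available; the intrinsics K, the file names, and the function name estimate_relative_pose are placeholders, not part of any of the listed papers.

# Minimal sketch of one feature-based visual-SLAM tracking step:
# estimate the relative camera pose between two frames from matched ORB features.
# Assumes OpenCV and NumPy; intrinsics K and input frames are placeholder values.
import cv2
import numpy as np


def estimate_relative_pose(frame1, frame2, K):
    """Return (R, t, inlier_mask) describing the motion from frame1 to frame2."""
    orb = cv2.ORB_create(nfeatures=2000)

    # Detect keypoints and compute binary descriptors in both frames.
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match descriptors with Hamming distance; cross-checking reduces outliers.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix with RANSAC, then recover R and t
    # (translation is only known up to scale with a monocular camera).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, mask


if __name__ == "__main__":
    # Hypothetical usage: two consecutive grayscale frames and pinhole intrinsics.
    K = np.array([[718.9, 0.0, 607.2],
                  [0.0, 718.9, 185.2],
                  [0.0, 0.0, 1.0]])
    f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    R, t, _ = estimate_relative_pose(f1, f2, K)
    print("rotation:\n", R, "\nunit translation:\n", t.ravel())

A full SLAM system would add keyframe selection, map-point triangulation, loop closure, and global optimization on top of this per-frame tracking step; the hybrid, learned, and Gaussian-splatting approaches mentioned above mainly replace or augment the feature extraction, matching, and map representation used here.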
Papers
Challenges of SLAM in extremely unstructured environments: the DLR Planetary Stereo, Solid-State LiDAR, Inertial Dataset
Riccardo Giubilato, Wolfgang Stürzl, Armin Wedler, Rudolph Triebel
Semi-supervised Vector-Quantization in Visual SLAM using HGCN
Amir Zarringhalam, Saeed Shiry Ghidary, Ali Mohades Khorasani
Self-supervised Vector-Quantization in Visual SLAM using Deep Convolutional Autoencoders
Amir Zarringhalam, Saeed Shiry Ghidary, Ali Mohades Khorasani