Large-Scale SLAM
Large-scale simultaneous localization and mapping (SLAM) aims to build accurate 3D maps of extensive environments while simultaneously tracking the robot's or sensor's pose within them. Current research focuses on improving robustness and efficiency through several approaches: neural field representations (e.g., view-centric mapping), optimized LiDAR processing with motion compensation, and visual SLAM techniques that employ self-supervised vector quantization and deep learning for loop closure detection. These advances are crucial for enabling autonomous navigation in diverse and challenging environments, with applications ranging from underwater exploration and autonomous driving to indoor mapping and social-distancing monitoring.
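The core idea behind loop closure, mentioned above, can be illustrated with a toy example. The sketch below is an assumption-laden simplification: production SLAM systems solve a nonlinear pose-graph least-squares problem (e.g., with Gauss-Newton over SE(3)), whereas here a single loop-closure residual is simply distributed linearly along a 2D dead-reckoned trajectory to show how revisiting a known place lets accumulated odometry drift be corrected.

```python
# Minimal sketch of drift correction via a loop-closure constraint.
# Assumptions (illustrative only): poses are 2D points (x, y); odometry
# carries a small bias that accumulates into drift; a loop-closure
# detection tells us the robot has returned to its starting position.
# Real systems optimize a full pose graph; here the closure residual is
# spread linearly over the trajectory purely for clarity.

def integrate_odometry(moves):
    """Dead-reckon (x, y) positions from relative (dx, dy) steps."""
    poses = [(0.0, 0.0)]
    for dx, dy in moves:
        x, y = poses[-1]
        poses.append((x + dx, y + dy))
    return poses

def apply_loop_closure(poses, closure_target):
    """Distribute the loop-closure residual linearly along the path."""
    xe = closure_target[0] - poses[-1][0]  # residual in x
    ye = closure_target[1] - poses[-1][1]  # residual in y
    n = len(poses) - 1
    return [(x + xe * i / n, y + ye * i / n)
            for i, (x, y) in enumerate(poses)]

# A square path whose last two legs carry a small odometry bias.
moves = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.02), (0.02, -1.0)]
raw = integrate_odometry(moves)
# Loop closure: the robot observes it is back at the origin.
corrected = apply_loop_closure(raw, (0.0, 0.0))
print(raw[-1])        # drifted endpoint, offset from the origin
print(corrected[-1])  # endpoint after correction, back at the origin
```

The key design point this illustrates is that loop closures turn open-loop dead reckoning into a constrained estimation problem: without the closure constraint, drift grows without bound over a large environment, which is exactly why large-scale SLAM research invests heavily in reliable loop closure detection.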