Semantic Simultaneous Localization and Mapping

Semantic Simultaneous Localization and Mapping (SLAM) aims to build a map of an environment while simultaneously tracking a robot's location within it, using both geometric and semantic information about objects and scenes. Current research focuses on improving robustness in dynamic environments and on handling challenging conditions such as underwater settings. Methods often employ neural networks (e.g., transformers) for object detection and semantic segmentation, coupled with probabilistic data association for accurate landmark matching. These advances are crucial for reliable autonomous navigation in complex, unstructured environments, with applications ranging from robotics and autonomous vehicles to augmented reality and 3D modeling.
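To make the landmark-matching step concrete, the following is a minimal sketch of one common form of probabilistic data association: gating a detection against mapped landmarks by squared Mahalanobis distance under each landmark's position uncertainty. The function name, 2-D state, and chi-square gate value are illustrative assumptions, not taken from any specific paper in this collection.

```python
import numpy as np

def associate(detection, landmarks, covariances, gate=9.21):
    """Return the index of the best-matching landmark, or None if no
    landmark falls inside the chi-square gate.

    detection   : (2,) observed object position in the map frame
    landmarks   : (N, 2) estimated landmark positions
    covariances : (N, 2, 2) landmark position covariances
    gate        : chi-square threshold (9.21 ~ 99% for 2 DoF)

    Illustrative sketch only; real semantic SLAM systems also weigh
    semantic class likelihoods and may keep multiple hypotheses.
    """
    best_idx, best_d2 = None, gate
    for i, (mu, cov) in enumerate(zip(landmarks, covariances)):
        innov = detection - mu                      # innovation (residual)
        d2 = innov @ np.linalg.inv(cov) @ innov     # squared Mahalanobis distance
        if d2 < best_d2:                            # keep the closest gated match
            best_idx, best_d2 = i, d2
    return best_idx

landmarks = np.array([[0.0, 0.0], [5.0, 5.0]])
covariances = np.array([np.eye(2) * 0.5, np.eye(2) * 0.5])

print(associate(np.array([0.2, -0.1]), landmarks, covariances))   # matches landmark 0
print(associate(np.array([100.0, 100.0]), landmarks, covariances))  # outside the gate: None
```

A maximum-likelihood variant would additionally multiply in the detector's class probabilities, so that a detection labeled "chair" prefers chair landmarks even when a table landmark is geometrically closer.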

Papers