Semantic Simultaneous Localization and Mapping (SLAM)
Semantic Simultaneous Localization and Mapping (SLAM) builds a map of an environment while simultaneously tracking the robot's location within it, using both geometric and semantic information about the objects it observes. Current research emphasizes efficient object-based map representations, often formulated as probabilistic graphical models (factor graphs) and driven by advances in object detection and classification from computer vision. Because object classes help reject dynamic objects and disambiguate data association and loop closures, this approach improves robustness in dynamic environments and yields more informative maps, benefiting autonomous navigation, robotics, and augmented reality through richer scene understanding and more accurate localization.
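The object-level, factor-graph formulation mentioned above can be made concrete with a small example. The sketch below is a minimal illustration, not any specific paper's pipeline: it assumes GTSAM's Python bindings and a toy 2D setting, connects robot poses with odometry factors, adds detected objects as landmark variables observed through bearing-range factors, and keeps their (hypothetical) semantic labels in a side table keyed by landmark ID.

```python
# Minimal object-level factor-graph SLAM sketch (2D), assuming the GTSAM
# Python bindings are installed (pip install gtsam).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, L  # X(i): robot poses, L(j): object landmarks

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose with a prior factor.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry (between-pose) factors from wheel or visual odometry.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(X(0), X(1), gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(X(1), X(2), gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# Object detections become bearing-range factors on landmark variables.
# Semantic classes (hypothetical labels) live in a side table keyed by landmark.
labels = {L(0): "chair", L(1): "door"}
meas_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.2]))
graph.add(gtsam.BearingRangeFactor2D(X(0), L(0), gtsam.Rot2.fromDegrees(45), np.sqrt(8.0), meas_noise))
graph.add(gtsam.BearingRangeFactor2D(X(1), L(0), gtsam.Rot2.fromDegrees(90), 2.0, meas_noise))
graph.add(gtsam.BearingRangeFactor2D(X(2), L(1), gtsam.Rot2.fromDegrees(90), 2.0, meas_noise))

# Initial guesses, deliberately perturbed away from the truth.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(X(1), gtsam.Pose2(2.2, 0.1, -0.05))
initial.insert(X(2), gtsam.Pose2(4.1, 0.1, 0.0))
initial.insert(L(0), gtsam.Point2(1.8, 2.1))
initial.insert(L(1), gtsam.Point2(4.1, 1.8))

# Joint maximum-a-posteriori estimate over robot poses and object positions.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for key, label in labels.items():
    print(label, result.atPoint2(key))
```

In a full semantic SLAM system, the landmark keys would be produced by a data-association step that matches each detection's class (and appearance descriptor) against existing map objects; that matching is where the semantic information pays off.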