Allocentric Semantic Map
Allocentric semantic maps represent an environment from a bird's-eye view, independent of the observer's viewpoint, with the goal of building comprehensive, spatially grounded scene representations. Current research focuses on generating these maps from a variety of input modalities, including egocentric video, panoramic images, and data from distributed sensor networks, using transformers and convolutional neural networks to process and fuse the multi-modal inputs efficiently. This work matters for robotics, autonomous navigation, and augmented reality, where agents need a consistent top-down understanding of their surroundings in order to plan and act effectively.
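To make the egocentric-to-allocentric conversion concrete, below is a minimal NumPy sketch of the classical geometric route: back-project per-pixel depth and semantic labels into 3-D, transform them into a fixed world frame, and rasterize them onto a top-down grid. The function name, parameters, and grid conventions are illustrative assumptions, not taken from any of the papers summarized here; learned approaches typically replace this explicit projection with a transformer- or CNN-based view transformation.

```python
import numpy as np

def egocentric_to_allocentric(depth, labels, K, T_world_cam,
                              grid_size=100, cell_m=0.1):
    """Project an egocentric depth + semantic-label image into a
    top-down (allocentric) semantic grid. All names are illustrative.

    depth:        (H, W) metric depth in metres
    labels:       (H, W) integer semantic class per pixel
    K:            (3, 3) camera intrinsics
    T_world_cam:  (4, 4) camera-to-world pose
    Returns a (grid_size, grid_size) map of class ids (-1 = unobserved).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project pixels to 3-D points in the camera frame.
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # (4, N)

    # Transform into the fixed world (allocentric) frame.
    pts_world = T_world_cam @ pts_cam                        # (4, N)

    # Discretise ground-plane coordinates into grid cells,
    # with the grid centred on the world origin.
    half = grid_size * cell_m / 2.0
    gx = ((pts_world[0] + half) / cell_m).astype(int)
    gy = ((pts_world[1] + half) / cell_m).astype(int)

    valid = (z > 0) & (gx >= 0) & (gx < grid_size) & (gy >= 0) & (gy < grid_size)

    grid = np.full((grid_size, grid_size), -1, dtype=int)
    grid[gy[valid], gx[valid]] = labels.ravel()[valid]   # last observation wins
    return grid
```

In practice, successive frames are fused into the same grid (e.g. by majority vote or learned aggregation), which is what yields an observer-independent map rather than a single-frame projection.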