Ego Pose

Ego pose estimation focuses on determining an agent's position and orientation within its environment, primarily using egocentric (first-person) sensor data. Current research emphasizes robust estimation from multiple sensor modalities, including cameras, LiDAR, and IMUs, often employing deep learning architectures such as diffusion models to fuse data and handle noisy or intermittent observations. This capability is crucial for robotics, autonomous driving, and augmented/virtual reality, where accurate scene understanding and interaction depend on knowing the agent's pose, particularly in dynamic and unstructured environments.
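To make the fusion idea concrete, here is a minimal sketch (not any specific paper's method) of combining high-rate dead reckoning with intermittent absolute fixes. It assumes a simplified 2D pose (x, y, yaw), body-frame velocity and yaw rate from an IMU-like source, and a complementary-filter blend for the occasional camera fix; `Pose2D`, `integrate_imu`, and `fuse_fix` are illustrative names, and real systems typically use 3D poses and Kalman or factor-graph estimators instead.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    yaw: float  # radians

def integrate_imu(pose: Pose2D, v: float, omega: float, dt: float) -> Pose2D:
    """Dead-reckon: advance the pose using body-frame speed v and yaw rate omega."""
    yaw = pose.yaw + omega * dt
    return Pose2D(pose.x + v * math.cos(yaw) * dt,
                  pose.y + v * math.sin(yaw) * dt,
                  yaw)

def fuse_fix(pose: Pose2D, fix: Pose2D, alpha: float = 0.3) -> Pose2D:
    """Blend an intermittent absolute fix (e.g., visual localization) into the
    dead-reckoned estimate; alpha weights the fix (complementary filter)."""
    # Wrap-aware blend of yaw so angles near +/-pi fuse correctly.
    dyaw = math.atan2(math.sin(fix.yaw - pose.yaw), math.cos(fix.yaw - pose.yaw))
    return Pose2D((1 - alpha) * pose.x + alpha * fix.x,
                  (1 - alpha) * pose.y + alpha * fix.y,
                  pose.yaw + alpha * dyaw)

# Drive straight at 1 m/s for 1 s of 100 ms IMU steps, then fuse a camera fix.
pose = Pose2D(0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_imu(pose, v=1.0, omega=0.0, dt=0.1)
pose = fuse_fix(pose, Pose2D(1.05, 0.02, 0.0))
print(f"{pose.x:.3f}, {pose.y:.3f}")  # → 1.015, 0.006
```

The split mirrors the motivation in the paragraph above: the high-rate source keeps the estimate smooth between observations, while the intermittent fix corrects accumulated drift when it arrives.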

Papers