Relative Pose
Relative pose estimation focuses on determining the position and orientation of one object or camera relative to another, a fundamental problem across numerous fields. Current research emphasizes robust and efficient methods, exploring diverse approaches such as those based on geometric constraints (e.g., point and line features, epipolar geometry), deep learning architectures (e.g., convolutional neural networks, transformers), and fusion techniques combining visual and inertial data. These advances are crucial for applications ranging from autonomous navigation (e.g., for UAVs and underwater vehicles) to robotics, augmented reality, and space exploration, enabling improved coordination and perception in complex environments.
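The geometric-constraint approaches mentioned above rest on the epipolar constraint: for a relative pose (R, t) between two cameras, the essential matrix E = [t]×R satisfies x₂ᵀ E x₁ = 0 for every pair of corresponding normalized image points. The sketch below (a minimal illustration with a hypothetical pose and synthetic points, not any specific paper's method) constructs E from a known pose and checks that projected correspondences satisfy the constraint:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose: small rotation about z, translation mostly along x.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.2, 0.0])

# Essential matrix encoding the relative pose.
E = skew(t) @ R

# Synthetic 3D points in front of camera 1 (z between 4 and 8).
rng = np.random.default_rng(0)
X1 = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))

# Normalized image coordinates: x = X / z in each camera frame.
x1 = X1 / X1[:, 2:3]
X2 = (R @ X1.T).T + t          # same points expressed in camera-2 frame
x2 = X2 / X2[:, 2:3]

# Epipolar constraint x2^T E x1 = 0 for every correspondence.
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)
print("max |x2^T E x1| =", np.max(np.abs(residuals)))
```

In practice the roles are reversed: E is estimated from noisy correspondences (e.g., with a five-point minimal solver inside RANSAC) and then decomposed to recover R and t up to the scale of t; the identity above is what makes that estimation well-posed.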
Papers
AIVIO: Closed-loop, Object-relative Navigation of UAVs with AI-aided Visual Inertial Odometry
Thomas Jantos, Martin Scheiber, Christian Brommer, Eren Allak, Stephan Weiss, Jan Steinbrener
Are Minimal Radial Distortion Solvers Necessary for Relative Pose Estimation?
Charalambos Tzamos, Viktor Kocur, Yaqing Ding, Torsten Sattler, Zuzana Kukelova