Relative Pose
Relative pose estimation focuses on determining the position and orientation of one object or camera relative to another, a fundamental problem across numerous fields. Current research emphasizes robust and efficient methods, exploring diverse approaches such as those based on geometric constraints (e.g., point and line features, epipolar geometry), deep learning architectures (e.g., convolutional neural networks, transformers), and fusion of visual and inertial data. These advances are crucial for applications ranging from autonomous navigation (e.g., UAVs and underwater vehicles) to robotics, augmented reality, and space exploration, enabling improved coordination and perception in complex environments.
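To make the epipolar-geometry approach concrete, here is a minimal NumPy sketch of the classical pipeline: the eight-point algorithm estimates the essential matrix from point correspondences in two calibrated views, and the matrix is then decomposed into the relative rotation and a scale-free translation, with the correct candidate selected by a cheirality (positive-depth) check. All function names are illustrative, and the sketch assumes noiseless, normalized image coordinates; real systems add RANSAC and noise-aware refinement.

```python
import numpy as np

def essential_from_points(x1, x2):
    """Eight-point algorithm: x1, x2 are (N, 2) normalized coords, N >= 8."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each row encodes the epipolar constraint x2^T E x1 = 0.
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def positive_depth(R, t, a, b):
    """Triangulate one correspondence via DLT; check depth in both views."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([a[0]*P1[2] - P1[0], a[1]*P1[2] - P1[1],
                   b[0]*P2[2] - P2[0], b[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1][:3] / Vt[-1][3]
    return X[2] > 0 and (R @ X + t)[2] > 0

def decompose_essential(E, x1, x2):
    """Return (R, t) among the four candidates with most points in front."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    best = None
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            n = sum(positive_depth(R, t, a, b) for a, b in zip(x1, x2))
            if best is None or n > best[0]:
                best = (n, R, t)
    return best[1], best[2]

# Synthetic two-view setup with a known ground-truth pose.
rng = np.random.default_rng(0)
th = 0.1  # rotation about the y-axis
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)  # translation is recoverable only up to scale
X = np.column_stack([rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20),
                     rng.uniform(4, 8, 20)])   # points in front of camera 1
x1 = X[:, :2] / X[:, 2:3]                      # view 1: identity pose
Xc2 = X @ R_true.T + t_true                    # view 2: x2 ~ R X + t
x2 = Xc2[:, :2] / Xc2[:, 2:3]

E = essential_from_points(x1, x2)
R_hat, t_hat = decompose_essential(E, x1, x2)
```

With noiseless correspondences, `R_hat` and `t_hat` match the ground-truth rotation and unit translation to numerical precision; the cheirality check is what resolves the fourfold sign ambiguity of the decomposition.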
Papers
Are Semi-Dense Detector-Free Methods Good at Matching Local Features?
Matthieu Vilain, Rémi Giraud, Hugo Germain, Guillaume Bourmaud
Gaussian-Sum Filter for Range-based 3D Relative Pose Estimation in the Presence of Ambiguities
Syed S. Ahmed, Mohammed A. Shalaby, Charles C. Cossette, Jerome Le Ny, James R. Forbes