Paper ID: 2503.18671 • Published Mar 24, 2025
Structure-Aware Correspondence Learning for Relative Pose Estimation
Yihan Chen, Wenfei Yang, Huan Ren, Shifeng Zhang, Tianzhu Zhang, Feng Wu
University of Science and Technology of China • National Key Laboratory of Deep Space Exploration • Sangfor Technologies
Relative pose estimation provides a promising way for achieving
object-agnostic pose estimation. Despite the success of existing 3D
correspondence-based methods, the reliance on explicit feature matching suffers
from small overlaps in visible regions and unreliable feature estimation for
invisible regions. Inspired by humans' ability to assemble two object parts
that have small or no overlapping regions by considering object structure, we
propose a novel Structure-Aware Correspondence Learning method for Relative
Pose Estimation, which consists of two key modules. First, a structure-aware
keypoint extraction module is designed to locate a set of keypoints that can
represent the structure of objects with different shapes and appearances, under
the guidance of a keypoint-based image reconstruction loss. Second, a
structure-aware correspondence estimation module is designed to model the
intra-image and inter-image relationships between keypoints to extract
structure-aware features for correspondence estimation. By jointly leveraging
these two modules, the proposed method can naturally estimate 3D-3D
correspondences for unseen objects without explicit feature matching for
precise relative pose estimation. Experimental results on the CO3D, Objaverse
and LineMOD datasets demonstrate that the proposed method significantly
outperforms prior methods, i.e., with a 5.7° reduction in mean angular error
on the CO3D dataset.
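Once 3D-3D keypoint correspondences are available, the relative pose itself can be recovered in closed form. The following is a minimal illustrative sketch (not the paper's implementation), assuming weighted 3D-3D matches between two views, using the standard weighted Kabsch/Procrustes algorithm:

```python
import numpy as np

def relative_pose_from_correspondences(P, Q, weights=None):
    """Recover rotation R and translation t with Q ~ R @ P.T + t, given paired
    3D keypoints P, Q of shape (N, 3), via the weighted Kabsch algorithm.
    `weights` (N,) can encode per-correspondence confidence (hypothetical)."""
    if weights is None:
        weights = np.ones(len(P))
    w = weights / weights.sum()

    # Weighted centroids and centered point sets.
    mu_p = (w[:, None] * P).sum(axis=0)
    mu_q = (w[:, None] * Q).sum(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q

    # Weighted cross-covariance matrix and its SVD.
    H = (w[:, None] * Pc).T @ Qc
    U, _, Vt = np.linalg.svd(H)

    # Reflection guard keeps R a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

The weights are a natural place to plug in correspondence confidences produced by the correspondence estimation module, so that unreliable keypoint matches contribute less to the pose solution.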