Paper ID: 2503.20220 • Published Mar 26, 2025
DINeMo: Learning Neural Mesh Models with no 3D Annotations
Weijie Guo, Guofeng Zhang, Wufei Ma, Alan Yuille
Johns Hopkins University
Category-level 3D/6D pose estimation is a crucial step towards comprehensive
3D scene understanding, which would enable a broad range of applications in
robotics and embodied AI. Recent works have explored neural mesh models that
approach a range of 2D and 3D tasks from an analysis-by-synthesis perspective.
Despite their largely enhanced robustness to partial occlusion and domain
shifts, these methods depend heavily on 3D annotations for part-contrastive
learning, which confines them to a narrow set of categories and hinders
efficient scaling. In this work, we present DINeMo, a novel neural mesh model
that is trained with no 3D annotations by leveraging pseudo-correspondences
obtained from large visual foundation models. We adopt a bidirectional
pseudo-correspondence generation method, which produces pseudo-correspondences
by combining local appearance features with global context information (see
the sketch below).
Experimental results on car datasets demonstrate that our DINeMo outperforms
previous zero- and few-shot 3D pose estimation methods by a wide margin,
narrowing the gap with fully-supervised methods by 67.3%. Our DINeMo also
scales effectively and efficiently when incorporating more unlabeled images
during training, demonstrating its advantages over supervised learning methods
that rely on 3D annotations. Our project page is available at
this https URL
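
The abstract does not spell out the matching rule, so the following is only a
minimal sketch of one plausible interpretation: DINO-style patch features
matched to mesh-vertex features via mutual nearest neighbors as the
"bidirectional" criterion. The function name, similarity threshold, and toy
feature shapes are illustrative assumptions, not the paper's stated
implementation, and the global-context component mentioned in the abstract is
not modeled here.

    import torch

    def mutual_nn_correspondences(img_feats: torch.Tensor,
                                  vert_feats: torch.Tensor,
                                  sim_thresh: float = 0.5):
        """Return (patch_idx, vertex_idx) pairs that are mutual nearest
        neighbors with cosine similarity above sim_thresh.

        img_feats:  (N, D) L2-normalized features of N image patches.
        vert_feats: (M, D) L2-normalized features of M mesh vertices.
        """
        sim = img_feats @ vert_feats.T                 # (N, M) cosine similarities
        best_v = sim.argmax(dim=1)                     # nearest vertex per patch
        best_p = sim.argmax(dim=0)                     # nearest patch per vertex
        patches = torch.arange(img_feats.shape[0])
        mutual = best_p[best_v] == patches             # patch -> vertex -> same patch back
        confident = sim[patches, best_v] > sim_thresh  # drop weak matches
        keep = mutual & confident
        return patches[keep], best_v[keep]

    # Toy usage with random unit-norm features (stand-ins for real backbone features).
    img_feats = torch.nn.functional.normalize(torch.randn(196, 384), dim=1)
    vert_feats = torch.nn.functional.normalize(torch.randn(500, 384), dim=1)
    p_idx, v_idx = mutual_nn_correspondences(img_feats, vert_feats, sim_thresh=0.1)
    print(f"{len(p_idx)} pseudo-correspondences")

Mutual nearest-neighbor filtering is a standard way to make a correspondence
set symmetric: a pair survives only if each side picks the other as its best
match, which suppresses one-directional spurious matches at the cost of recall.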