Paper ID: 2303.11726
VMarker-Pro: Probabilistic 3D Human Mesh Estimation from Virtual Markers
Xiaoxuan Ma, Jiajun Su, Yuan Xu, Wentao Zhu, Chunyu Wang, Yizhou Wang
Monocular 3D human mesh estimation is challenging due to depth ambiguity and the difficulty of mapping images to a complex parameter space. Recent methods use 3D poses as a proxy representation, which often discards crucial body shape information and leads to mediocre performance. Conversely, advanced motion capture systems, though accurate, rely on physical markers and are impractical for in-the-wild images. To address these limitations, we introduce an intermediate representation, virtual markers, which are learned from large-scale mocap data and mimic the effects of physical markers. Building on virtual markers, we propose VMarker, which detects virtual markers in wild images; the intact mesh with realistic shape is then obtained by simple interpolation from these markers. To handle occlusions that obscure 3D virtual marker estimation, we further propose VMarker-Pro, a probabilistic framework that generates multiple plausible meshes consistent with the image. It models 3D virtual marker estimation as a conditional denoising process, enabling robust 3D mesh estimation. Our approaches surpass existing methods on three benchmark datasets, with particularly large improvements on the SURREAL dataset, which features diverse body shapes. In addition, VMarker-Pro accurately models the data distribution, significantly improving performance in occluded scenarios. Code and models are available at https://github.com/ShirleyMaxx/VMarker-Pro.
Submitted: Mar 21, 2023
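As a rough illustration of the two mechanisms the abstract names, the sketch below renders both in NumPy: recovering mesh vertices by a single linear interpolation from estimated 3D virtual markers, and a DDIM-style conditional denoising loop over marker coordinates. All names, shapes, weights, and schedules here are illustrative assumptions (e.g., `A`, `eps_net`, the marker count), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Mesh from virtual markers (illustrative shapes/weights) -------------
V, K = 6890, 64                     # assumed: SMPL-sized mesh, 64 virtual markers
A = rng.random((V, K))              # stand-in for the learned interpolation matrix
A /= A.sum(axis=1, keepdims=True)   # each vertex as a normalized blend of markers

def mesh_from_markers(markers: np.ndarray, interp: np.ndarray) -> np.ndarray:
    """Recover mesh vertices (V, 3) from 3D virtual markers (K, 3)
    with one linear interpolation, as the abstract describes."""
    return interp @ markers

# --- Conditional denoising of 3D markers (DDIM-style sketch) -------------
def denoise_markers(x_T, cond, eps_net, alphas_bar):
    """Reverse-diffusion sketch: start from noise over marker coordinates
    and iteratively denoise conditioned on image features `cond`.
    `eps_net` stands in for a learned conditional noise predictor."""
    x = x_T
    for t in reversed(range(len(alphas_bar))):
        eps = eps_net(x, t, cond)                    # predicted noise
        a_t = alphas_bar[t]
        a_prev = alphas_bar[t - 1] if t > 0 else 1.0
        x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)       # clean-marker estimate
        x = np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * eps   # deterministic DDIM step
    return x0

# Dummy components so the sketch runs end to end.
eps_net = lambda x, t, cond: np.zeros_like(x)        # placeholder network
alphas_bar = np.linspace(0.999, 0.01, 50)            # toy noise schedule

markers = denoise_markers(rng.standard_normal((K, 3)), cond=None,
                          eps_net=eps_net, alphas_bar=alphas_bar)
vertices = mesh_from_markers(markers, A)
print(vertices.shape)                                # (6890, 3)
```

Because the denoising starts from random noise, sampling several initial `x_T` values yields multiple plausible marker sets, and hence multiple plausible meshes, matching the abstract's description of VMarker-Pro.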