Paper ID: 2503.08678 • Published Mar 11, 2025
GarmentCrafter: Progressive Novel View Synthesis for Single-View 3D Garment Reconstruction and Editing
Yuanhao Wang, Cheng Zhang, Gonçalo Frazão, Jinlong Yang, Alexandru-Eugen Ichim, Thabo Beeler, Fernando De la Torre
Carnegie Mellon University•Texas A&M University•Google AR
Abstract
We introduce GarmentCrafter, a new approach that enables non-professional
users to create and modify 3D garments from a single-view image. While recent
advances in image generation have facilitated 2D garment design, creating and
editing 3D garments remains challenging for non-professional users. Existing
methods for single-view 3D reconstruction often rely on pre-trained generative
models to synthesize novel views conditioned on the reference image and camera
pose, yet they lack cross-view consistency, failing to capture the internal
relationships across different views. In this paper, we tackle this challenge
through progressive depth prediction and image warping to approximate novel
views. Subsequently, we train a multi-view diffusion model to complete occluded
and unknown clothing regions, informed by the evolving camera pose. By jointly
inferring RGB and depth, GarmentCrafter enforces inter-view coherence and
reconstructs precise geometries and fine details. Extensive experiments
demonstrate that our method achieves superior visual fidelity and inter-view
coherence compared to state-of-the-art single-view 3D garment reconstruction
methods.
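The pipeline the abstract describes first approximates a novel view by warping the reference image with a predicted depth map, leaving holes in occluded and unseen regions for a diffusion model to complete. As a rough illustration of that warping step, the sketch below (a generic depth-based forward warp, not the paper's actual implementation; all function and parameter names are hypothetical) unprojects each pixel using its depth, applies a relative camera transform, and reprojects into the target view:

```python
import numpy as np

def warp_to_novel_view(rgb, depth, K, R, t):
    """Forward-warp an RGB image into a novel view using per-pixel depth.

    A generic sketch of depth-based warping: unproject each pixel with
    the depth map, transform it by the relative pose (R, t), and
    reproject through the intrinsics K. Pixels with no source
    correspondence stay zero (holes), which a generative model would
    then be asked to complete.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape 3 x N.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Unproject to 3D points in the source camera frame.
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    # Transform into the target camera frame and reproject.
    pts_t = R @ pts + t.reshape(3, 1)
    proj = K @ pts_t
    z = proj[2]
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(rgb)
    # Simple z-buffering: draw farthest points first so nearer
    # points overwrite them where projections collide.
    order = np.argsort(-z[valid])
    src = np.flatnonzero(valid)[order]
    out[v[valid][order], u[valid][order]] = rgb.reshape(-1, rgb.shape[-1])[src]
    return out
```

The hole-filling stage, and the joint RGB-and-depth inference that enforces cross-view consistency, are where the paper's multi-view diffusion model comes in; the warp alone only transports pixels that are visible in the reference view.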