Single-View 3D Reconstruction
Single-view 3D reconstruction aims to create three-dimensional models from a single two-dimensional image, a fundamentally challenging inverse problem because many distinct 3D shapes can project to the same 2D view. Current research focuses on improving accuracy and efficiency through novel neural network architectures, such as those employing Gaussian splatting, transformers, and mesh deformation techniques, often incorporating physical constraints or multi-view consistency for enhanced realism. These advances matter for applications ranging from virtual try-ons and robotic manipulation to 3D modeling and scene understanding, pushing the boundaries of what is possible with limited visual input.
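To make the Gaussian-splatting representation mentioned above concrete, here is a minimal, illustrative sketch of a single 3D Gaussian primitive (anisotropic covariance built from per-axis scales and a rotation quaternion) and a pinhole projection of its center into the input view. All function and parameter names here are assumptions for illustration, not the API of any paper listed below.

```python
import numpy as np

def gaussian_covariance(scale, quat):
    """Build a 3x3 covariance Sigma = R S S^T R^T from per-axis scales
    and a unit quaternion (w, x, y, z). Illustrative sketch only."""
    w, x, y, z = quat / np.linalg.norm(quat)
    # Standard quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale)
    return R @ S @ S @ R.T  # symmetric positive semi-definite by construction

def project_point(p_cam, f, cx, cy):
    """Project a 3D point in camera coordinates (z > 0) with a pinhole model."""
    x, y, z = p_cam
    return np.array([f * x / z + cx, f * y / z + cy])

# One Gaussian: identity rotation, anisotropic scales; project its center.
cov = gaussian_covariance(np.array([0.1, 0.1, 0.2]), np.array([1.0, 0.0, 0.0, 0.0]))
uv = project_point(np.array([0.0, 0.0, 2.0]), f=500.0, cx=320.0, cy=240.0)
```

In splatting-based pipelines, a network predicts many such primitives (means, scales, rotations, opacities, colors) from the image, and a differentiable rasterizer blends their projections to render novel views.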
Papers
Physically Compatible 3D Object Modeling from a Single Image
Minghao Guo, Bohan Wang, Pingchuan Ma, Tianyuan Zhang, Crystal Elaine Owens, Chuang Gan, Joshua B. Tenenbaum, Kaiming He, Wojciech Matusik
A Pixel Is Worth More Than One 3D Gaussians in Single-View 3D Reconstruction
Jianghao Shen, Nan Xue, Tianfu Wu
TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes
Minghao Guo, Bohan Wang, Kaiming He, Wojciech Matusik
Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning
Rui Li, Tobias Fischer, Mattia Segu, Marc Pollefeys, Luc Van Gool, Federico Tombari
MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation
Hanzhe Hu, Zhizhuo Zhou, Varun Jampani, Shubham Tulsiani
Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View
Andreea Dogaru, Mert Özer, Bernhard Egger