3D Reconstruction
3D reconstruction aims to create three-dimensional models from two-dimensional data sources such as images or videos, with applications spanning diverse fields. Current research emphasizes improving accuracy and efficiency, particularly in challenging scenarios such as sparse viewpoints, dynamic scenes, and occluded objects. Popular approaches build on neural radiance fields (NeRFs), Gaussian splatting, and other deep learning architectures, often incorporating techniques like active view selection and multi-view stereo to enhance reconstruction quality. These advances are driving progress in robotics, medical imaging, and remote sensing, where more accurate and detailed 3D models are needed.
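To make the NeRF family of methods mentioned above concrete, the sketch below shows the core volume-rendering step they share: alpha-compositing color samples along a camera ray from predicted densities. This is a minimal NumPy illustration of the standard compositing equation, not code from any of the listed papers; the function name and inputs are chosen for this example.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style volume rendering).

    densities: (N,)   non-negative volume densities sigma_i
    colors:    (N, 3) RGB color c_i at each sample
    deltas:    (N,)   distance between adjacent samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): fraction of light
    # that survives past all samples in front of sample i
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    # Weighted sum of sample colors gives the rendered pixel color
    return (weights[:, None] * colors).sum(axis=0)

# A nearly opaque red sample occludes everything behind it,
# so the rendered color is dominated by red.
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],   # blue (in front, transparent)
                   [1.0, 0.0, 0.0],   # red  (dense)
                   [0.0, 1.0, 0.0]])  # green (behind, occluded)
deltas = np.array([0.1, 0.1, 0.1])
rgb = composite_ray(densities, colors, deltas)
```

Gaussian splatting uses the same front-to-back compositing weights, but accumulates projected 2D Gaussians per pixel instead of ray samples from a neural field.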
Papers
GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image
Xiao Fu, Wei Yin, Mu Hu, Kaixuan Wang, Yuexin Ma, Ping Tan, Shaojie Shen, Dahua Lin, Xiaoxiao Long
GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors
Yang Li, Ruizheng Wu, Jiyong Li, Ying-cong Chen
Fed3DGS: Scalable 3D Gaussian Splatting with Federated Learning
Teppei Suzuki
ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image
Marco Pesavento, Yuanlu Xu, Nikolaos Sarafianos, Robert Maier, Ziyan Wang, Chun-Han Yao, Marco Volino, Edmond Boyer, Adrian Hilton, Tony Tung
SCILLA: SurfaCe Implicit Learning for Large Urban Area, a volumetric hybrid solution
Hala Djeghim, Nathan Piasco, Moussab Bennehar, Luis Roldão, Dzmitry Tsishkou, Désiré Sidibé