Diffusion Prior
Diffusion priors leverage the generative capabilities of pre-trained diffusion models to improve image and 3D reconstruction. Current research focuses on integrating these priors into diverse applications, including image restoration (e.g., super-resolution, inpainting, colorization), 3D scene reconstruction (e.g., NeRFs, Gaussian splatting), and inverse problems (e.g., blind deblurring, material recovery). By supplying strong, data-driven priors that guide the reconstruction process, this approach improves reconstruction quality and efficiency, often surpassing methods that rely solely on traditional optimization or supervised learning. These advances have significant implications for computer vision, medical imaging, and other fields that must produce high-quality images and 3D models from limited or noisy data.
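The guidance idea underlying most of these methods can be illustrated with a toy sketch: at each noise level, a reconstruction is nudged by the prior's score (the gradient of its log-density) plus a data-fidelity gradient tying it to the measurements. The sketch below is a minimal, self-contained illustration only, not any paper's method: it stands in a closed-form Gaussian score for the pre-trained diffusion network, and all sizes, names, and step-size choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem (sizes are illustrative): recover x from
# undersampled, noisy measurements y = A @ x + noise.
d, m = 8, 4
x_true = rng.normal(size=d)
A = rng.normal(size=(m, d)) / np.sqrt(d)
y = A @ x_true + 0.01 * rng.normal(size=m)

def prior_score(x, sigma):
    # Stand-in "diffusion prior": score of a Gaussian N(0, I) smoothed by
    # the current noise level sigma. A real diffusion prior replaces this
    # with the score implied by a pre-trained denoising network.
    return -x / (1.0 + sigma ** 2)

def reconstruct(y, A, n_levels=10, steps=50, lam=1.0):
    """Annealed score-guided reconstruction: at each noise level, take
    gradient steps on prior score + data-fidelity gradient (Langevin
    dynamics with the noise injection dropped, for a deterministic demo)."""
    x = rng.normal(size=A.shape[1])
    for sigma in np.geomspace(1.0, 0.01, n_levels):
        eps = 0.5 * sigma ** 2  # step size shrinks with the noise level
        for _ in range(steps):
            grad = prior_score(x, sigma) + lam * A.T @ (y - A @ x) / sigma ** 2
            x = x + eps * grad
    return x

x_hat = reconstruct(y, A)
print("measurement residual:", np.linalg.norm(A @ x_hat - y))
```

Because the problem is underdetermined (4 measurements, 8 unknowns), the data term alone has infinitely many solutions; the prior term is what selects among them, which is exactly the role the learned diffusion prior plays in the papers below.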
Papers
InFusion: Inpainting 3D Gaussians via Learning Depth Completion from Diffusion Prior
Zhiheng Liu, Hao Ouyang, Qiuyu Wang, Ka Leong Cheng, Jie Xiao, Kai Zhu, Nan Xue, Yu Liu, Yujun Shen, Yang Cao
IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination
Xi Chen, Sida Peng, Dongchen Yang, Yuan Liu, Bowen Pan, Chengfei Lv, Xiaowei Zhou
GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image
Xiao Fu, Wei Yin, Mu Hu, Kaixuan Wang, Yuexin Ma, Ping Tan, Shaojie Shen, Dahua Lin, Xiaoxiao Long
BAGS: Building Animatable Gaussian Splatting from a Monocular Video with Diffusion Priors
Tingyang Zhang, Qingzhe Gao, Weiyu Li, Libin Liu, Baoquan Chen