Diffusion Prior
Diffusion priors leverage the powerful generative capabilities of pre-trained diffusion models to enhance various image and 3D reconstruction tasks. Current research focuses on integrating these priors into diverse applications, including image restoration (e.g., super-resolution, inpainting, colorization), 3D scene reconstruction (e.g., NeRFs, Gaussian splatting), and inverse problems (e.g., blind deblurring, material recovery). This approach improves the quality and efficiency of these tasks by providing strong, data-driven priors that guide the reconstruction process, often surpassing methods relying solely on traditional optimization or supervised learning. The resulting advancements have significant implications for computer vision, medical imaging, and other fields requiring high-quality image and 3D model generation from limited or noisy data.
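The guidance idea described above — alternating a learned denoising (prior) step with a data-consistency step during reverse diffusion — can be sketched in a toy form. This is a minimal illustration, not any specific paper's method: the pretrained diffusion model is replaced by a simple smoothing surrogate, and the function name, schedule, and step sizes are all hypothetical choices for demonstration.

```python
import numpy as np

def diffusion_prior_restore(y, A, n_steps=50, guidance=1.0, seed=0):
    """Toy sketch of diffusion-prior-guided inverse-problem solving.

    y : observed measurements, y = A @ x (possibly noisy)
    A : measurement matrix (here, e.g., a subsampling mask for inpainting)

    The "prior" is a stand-in denoiser that shrinks toward locally
    smooth signals; real methods use a pretrained diffusion model's
    score network in its place.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = rng.standard_normal(n)  # start from pure noise, as in reverse diffusion
    for t in range(n_steps, 0, -1):
        sigma = t / n_steps  # crude noise-level schedule, 1 -> 0
        # Surrogate prior step: smoothing stands in for the learned denoiser.
        x_denoised = np.convolve(x, np.ones(3) / 3, mode="same")
        x = x + (1.0 - sigma) * (x_denoised - x)
        # Data-consistency (measurement-guidance) gradient step.
        grad = A.T @ (A @ x - y)
        x = x - guidance * sigma * grad
    return x
```

For example, subsampling a smooth 1D signal and reconstructing it shows the prior filling in the unobserved entries while the guidance term keeps the observed entries consistent with the measurements.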
Papers
Towards High-Fidelity 3D Portrait Generation with Rich Details by Cross-View Prior-Aware Diffusion
Haoran Wei, Wencheng Han, Xingping Dong, Jianbing Shen
Probabilistic Prior Driven Attention Mechanism Based on Diffusion Model for Imaging Through Atmospheric Turbulence
Guodong Sun, Qixiang Ma, Liqiang Zhang, Hongwei Wang, Zixuan Gao, Haotian Zhang