Score Distillation
Score distillation leverages pre-trained diffusion models to generate new data efficiently, primarily images and 3D models, by transferring the learned score function (the gradient of the log probability density) to a smaller, faster student model. Current research focuses on improving the speed and quality of this transfer, addressing issues such as mode collapse, over-smoothing, and view inconsistency, often through novel loss functions and optimization strategies within frameworks like variational score distillation and denoising diffusion implicit models. The technique is significant for accelerating generative AI and for enabling applications such as text-to-3D generation, image editing, and inverse problems in various fields, particularly where training data is scarce.
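As a rough illustration (not taken from any of the papers below), the core score distillation sampling (SDS) update can be sketched in one dimension. Here the "teacher" diffusion model's noise predictor is available in closed form because the data distribution is a Gaussian N(mu, sigma^2), and the "student" is a single optimized scalar standing in for an image or 3D representation; the noise schedule and step sizes are arbitrary choices for the sketch, and the U-Net Jacobian term dropped by SDS is simply omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher": data distribution N(mu, sigma^2). For Gaussian data the
# optimal noise predictor at diffusion time t has a closed form.
mu, sigma = 2.0, 0.5

def alpha_bar(t):
    # Illustrative cosine-style noise schedule on t in (0, 1).
    return np.cos(0.5 * np.pi * t) ** 2

def teacher_eps(x_t, t):
    ab = alpha_bar(t)
    var_t = ab * sigma ** 2 + (1.0 - ab)          # marginal variance at t
    score = -(x_t - np.sqrt(ab) * mu) / var_t     # grad log p_t(x_t)
    return -np.sqrt(1.0 - ab) * score             # eps_hat = -sqrt(1-ab)*score

# "Student": a single scalar being optimized (stand-in for an image/NeRF).
x = -3.0
lr = 0.05
for step in range(2000):
    t = rng.uniform(0.02, 0.98)
    eps = rng.standard_normal()
    ab = alpha_bar(t)
    x_t = np.sqrt(ab) * x + np.sqrt(1.0 - ab) * eps
    # SDS gradient: (eps_hat - eps), with the network Jacobian omitted
    # and the weighting w(t) set to 1 for simplicity.
    g = teacher_eps(x_t, t) - eps
    x -= lr * g

print(x)  # ends near mu = 2.0
```

In expectation the update direction is proportional to (x - mu), so the student is pulled toward a mode of the teacher's distribution; this mode-seeking behavior is exactly what motivates the mode-collapse and over-smoothing fixes discussed above.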
Papers
InfiniDreamer: Arbitrarily Long Human Motion Generation via Segment Score Distillation
Wenjie Zhuo, Fan Ma, Hehe Fan
TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution
Linwei Dong, Qingnan Fan, Yihong Guo, Zhonghao Wang, Qi Zhang, Jinwei Chen, Yawei Luo, Changqing Zou
ModeDreamer: Mode Guiding Score Distillation for Text-to-3D Generation using Reference Image Prompts
Uy Dieu Tran, Minh Luu, Phong Ha Nguyen, Khoi Nguyen, Binh-Son Hua