Diffusion Guidance
Diffusion guidance steers the sampling process of diffusion models toward desired outcomes, improving control and fidelity across a range of applications. Current research focuses on integrating diffusion guidance with different model architectures, such as neural radiance fields for 3D asset creation and various neural networks for image generation and reinforcement learning, often employing techniques like classifier-free guidance or spatially-aware score distillation to improve efficiency and control. The approach affects fields ranging from drug design (improving binding-affinity prediction) to image synthesis (finer-grained control over style and composition), demonstrating its utility across diverse scientific and engineering domains. The ability to precisely guide the generation process promises better quality and controllability of outputs in numerous applications.
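Classifier-free guidance, one of the techniques mentioned above, combines a conditional and an unconditional noise prediction at each denoising step and extrapolates away from the unconditional prediction by a guidance weight. The sketch below illustrates that combination with a toy stand-in network; the function names and signatures are illustrative assumptions, not the API of any particular paper or library.

```python
# Minimal sketch of classifier-free guidance (CFG) at one denoising step.
# `model` is a stand-in for any conditional noise-prediction network; its
# name and signature are assumptions for illustration only.
import torch

def cfg_noise_prediction(model, x_t, t, cond, guidance_scale=7.5):
    """Blend unconditional and conditional predictions with a guidance weight."""
    eps_uncond = model(x_t, t, cond=None)   # prediction without conditioning
    eps_cond = model(x_t, t, cond=cond)     # prediction with conditioning (e.g. a text embedding)
    # Extrapolate from the unconditional prediction toward the conditional one.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-in model so the sketch runs end to end.
def toy_model(x_t, t, cond=None):
    shift = 0.0 if cond is None else cond.mean()
    return 0.1 * x_t + shift

x_t = torch.randn(1, 3, 8, 8)   # noisy sample at step t
cond = torch.randn(1, 16)       # hypothetical conditioning embedding
eps = cfg_noise_prediction(toy_model, x_t, t=10, cond=cond, guidance_scale=5.0)
print(eps.shape)
```

Larger guidance scales push samples closer to the conditioning signal at the cost of diversity; a scale of 1.0 recovers plain conditional sampling.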
Papers
ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance
Yongwei Chen, Tengfei Wang, Tong Wu, Xingang Pan, Kui Jia, Ziwei Liu
Understanding and Improving Training-free Loss-based Diffusion Guidance
Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li
Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment
Zhaoyang Wang, Bo Hu, Mingyang Zhang, Jie Li, Leida Li, Maoguo Gong, Xinbo Gao
Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion
Yujia Huang, Adishree Ghatare, Yuanzhe Liu, Ziniu Hu, Qinsheng Zhang, Chandramouli S Sastry, Siddharth Gururani, Sageev Oore, Yisong Yue