Forward Diffusion
Forward diffusion is the process of gradually adding noise to data until it becomes pure noise; diffusion models generate data by learning to reverse this corruption and reconstruct the original samples. Current research focuses on improving the efficiency and effectiveness of this process, exploring variations such as structured forward processes for discrete data, learning the forward process itself using neural flows, and adapting the process to specific applications such as anomaly detection in medical images and 3D scene generation. These advances aim to improve the quality and speed of data generation across diverse domains, impacting fields such as image synthesis, quantum computing, and graph generation.
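As a concrete illustration, the forward process in a standard DDPM-style model can be sampled in closed form: given a variance schedule, x_t is a weighted mix of the clean data and Gaussian noise. A minimal sketch, assuming a linear beta schedule (the constants T, 1e-4, and 0.02 here are illustrative defaults, not prescribed by the papers above):

```python
import numpy as np

# Linear variance schedule beta_t (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def forward_diffuse(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x0 = np.ones(4)                      # toy "clean" data point
x_mid = forward_diffuse(x0, 500)     # partially noised
x_end = forward_diffuse(x0, T - 1)   # nearly pure noise, since alpha_bar_T ~ 0
```

By the final step the signal coefficient sqrt(alpha_bar_T) is close to zero, so x_T is effectively pure Gaussian noise; the reverse (generative) model is then trained to undo these steps.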
Papers
Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion
Kui Zhang, Hang Zhou, Jie Zhang, Qidong Huang, Weiming Zhang, Nenghai Yu
Dimensionality-Varying Diffusion Process
Han Zhang, Ruili Feng, Zhantao Yang, Lianghua Huang, Yu Liu, Yifei Zhang, Yujun Shen, Deli Zhao, Jingren Zhou, Fan Cheng