Diffusion-Based Methods
Diffusion-based methods are a class of generative models that leverage stochastic processes to create high-quality samples from complex data distributions, primarily focusing on image generation and manipulation. Current research emphasizes improving efficiency through single-step inference and novel architectures like those incorporating wavelets or residual networks, as well as enhancing control and fidelity by integrating techniques such as attention mechanisms, consistency models, and implicit guidance. These advancements are significantly impacting various fields, including image restoration, super-resolution, anomaly detection, and robotic manipulation, by offering faster and more accurate solutions to challenging inverse problems.
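To make the "stochastic process" idea concrete, below is a minimal sketch of the closed-form forward (noising) step used by DDPM-style diffusion models, assuming a linear variance schedule. The names `beta_schedule` and `q_sample` are illustrative, not from any specific library; a real model would additionally train a network to reverse this process.

```python
import numpy as np

def beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_1..beta_T (illustrative defaults)."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

T = 1000
betas = beta_schedule(T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative signal-retention factor

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))          # toy "image"
xt, eps = q_sample(x0, T - 1, alpha_bar, rng)
# At t = T-1, alpha_bar is tiny, so x_t is almost pure Gaussian noise;
# single-step inference methods aim to invert this in one network call
# instead of iterating over all T reverse steps.
```

The efficiency work mentioned above (single-step inference, consistency models) targets the reverse of this process: rather than denoising over all `T` steps, the model learns a direct map from noise back to data.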
Papers
Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask
Zineb Senane, Lele Cao, Valentin Leonhard Buchner, Yusuke Tashiro, Lei You, Pawel Herman, Mats Nordahl, Ruibo Tu, Vilhelm von Ehrenheim
StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework
Yiheng Huang, Hui Yang, Chuanchen Luo, Yuxi Wang, Shibiao Xu, Zhaoxiang Zhang, Man Zhang, Junran Peng