Diffusion Framework
Diffusion frameworks are a rapidly evolving class of generative models that learn complex data distributions by progressively corrupting data with noise and training a model to reverse that corruption. Current research focuses on improving efficiency, controllability, and applicability to diverse data types, including images, graphs, time series, and medical images, often building on architectures such as U-Nets and transformers. These models are proving valuable for tasks ranging from image generation and editing to combinatorial optimization and medical image segmentation, pushing the boundaries of generative modeling across scientific and practical domains. Their ability to produce high-quality samples and handle diverse data modalities makes diffusion frameworks a significant advance in machine learning.
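As a concrete illustration of the noising process mentioned above, the sketch below implements the closed-form forward step of a standard denoising diffusion model (DDPM-style), where a data point is blended with Gaussian noise according to a noise schedule. The schedule values and function names here are illustrative, not taken from any particular paper in the list.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style forward process.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta) up to step t.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Linear noise schedule over T steps (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8,))        # a toy "data" vector
xT = forward_diffuse(x0, T - 1, betas, rng)
# By the final step alpha_bar is near zero, so x_T is close to pure Gaussian noise;
# the generative model is trained to invert this corruption step by step.
```

The reverse (generative) direction, which is what the surveyed models actually learn, replaces the analytic noising with a neural network that predicts and removes the noise at each step.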
Papers
Multi-Source Encapsulation With Guaranteed Convergence Using Minimalist Robots
Himani Sinhmar, Hadas Kress-Gazit
A Survey on Diffusion Models for Time Series and Spatio-Temporal Data
Yiyuan Yang, Ming Jin, Haomin Wen, Chaoli Zhang, Yuxuan Liang, Lintao Ma, Yi Wang, Chenghao Liu, Bin Yang, Zenglin Xu, Jiang Bian, Shirui Pan, Qingsong Wen