Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
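The two core mechanisms the summary names, reversing a noise-diffusion process and classifier-free guidance, can be sketched in a few lines. This is a minimal illustrative sketch, not any specific library's API: the linear beta schedule follows the common DDPM-style setup, and the function names (`forward_diffuse`, `cfg_noise`) are assumptions chosen for clarity.

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I).

    This is the forward (noising) process; training a diffusion model
    amounts to learning to reverse it step by step.
    """
    noise = rng.standard_normal(x0.shape)
    abar = alpha_bars[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise, noise

def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the conditional one to strengthen conditioning."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Linear beta schedule (illustrative values in the usual DDPM range).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)          # a toy "data" vector
xt, eps = forward_diffuse(x0, t=T - 1, alpha_bars=alpha_bars, rng=rng)
# At t = T-1 almost all signal is destroyed: x_t is close to pure noise,
# which is why sampling can start from a standard Gaussian.
```

With `guidance_scale = 0` the guided prediction reduces to the unconditional one, and with `guidance_scale = 1` to the conditional one; values above 1 trade sample diversity for adherence to the condition.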
2,251 papers
September 27, 2024
Pruning then Reweighting: Towards Data-Efficient Training of Diffusion Models
Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions
Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models
Treating Brain-inspired Memories as Priors for Diffusion Model to Forecast Multivariate Time Series
September 26, 2024
Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs
AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status
ID3: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition
Flexiffusion: Segment-wise Neural Architecture Search for Flexible Denoising Schedule
Learning Quantized Adaptive Conditions for Diffusion Models