Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
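The two core ideas mentioned above can be sketched in a few lines. The following is a minimal illustration, not any paper's implementation: `forward_noise` applies the standard DDPM forward (noising) step, and `cfg_noise_estimate` shows how classifier-free guidance combines conditional and unconditional noise predictions; the function names and the plain-list tensors are illustrative assumptions.

```python
import math
import random

def forward_noise(x0, alpha_bar_t, rng):
    """DDPM forward process: sample x_t ~ N(sqrt(ab_t) * x0, (1 - ab_t) * I).

    `x0` is a flat list of floats standing in for an image tensor."""
    return [
        math.sqrt(alpha_bar_t) * x
        + math.sqrt(1.0 - alpha_bar_t) * rng.gauss(0.0, 1.0)
        for x in x0
    ]

def cfg_noise_estimate(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one by `guidance_scale`."""
    return [u + guidance_scale * (c - u) for c, u in zip(eps_cond, eps_uncond)]

# Noising a toy "image": higher t (lower alpha_bar_t) means more noise.
rng = random.Random(0)
xt = forward_noise([1.0, -2.0, 0.5], alpha_bar_t=0.5, rng=rng)
```

With `guidance_scale = 1` the guided estimate reduces to the conditional prediction, and with `0` to the unconditional one; values above 1 trade sample diversity for prompt adherence, which is the "unreasonable effectiveness" the guidance literature studies.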
2,251 papers
November 15, 2024
Probabilistic Prior Driven Attention Mechanism Based on Diffusion Model for Imaging Through Atmospheric Turbulence
DR-BFR: Degradation Representation with Diffusion Models for Blind Face Restoration
The Unreasonable Effectiveness of Guidance for Diffusion Models
ColorEdit: Training-free Image-Guided Color editing with diffusion model
Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training
November 12, 2024
Scaling Properties of Diffusion Models for Perceptual Tasks
Structured Pattern Expansion with Diffusion Models
Novel View Synthesis with Pixel-Space Diffusion Models
Unraveling the Connections between Flow Matching and Diffusion Probabilistic Models in Training-free Conditional Generation
Tracing the Roots: Leveraging Temporal Dynamics in Diffusion Trajectories for Origin Attribution