Diffusion Explainer
Diffusion models are generative models that create new data samples, primarily images and other high-dimensional data, by learning to reverse a gradual noise-addition process. Current research focuses on improving efficiency (e.g., one-step diffusion), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges such as data replication and mode collapse. These advances are impacting diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulation, by providing powerful tools for data generation, manipulation, and analysis.
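As a rough illustration of the reverse noising process and classifier-free guidance mentioned above, the sketch below runs a DDPM-style ancestral sampling loop. It is a minimal sketch, not the method of any paper listed here: the trained noise-prediction network is replaced by a hypothetical predict_noise stub, and the noise schedule values are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a trained noise-prediction network eps_theta(x_t, t, cond).
# In practice this is a neural network; here it returns zeros so the loop runs end to end.
def predict_noise(x_t, t, cond=None):
    return np.zeros_like(x_t)

def ddpm_sample(shape, num_steps=50, guidance_scale=3.0, cond="a photo of a cat", seed=0):
    """DDPM-style ancestral sampling with classifier-free guidance:
    eps = eps_uncond + s * (eps_cond - eps_uncond)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 2e-2, num_steps)   # illustrative forward noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)               # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        # Combine conditional and unconditional noise predictions (classifier-free guidance)
        eps_cond = predict_noise(x, t, cond)
        eps_uncond = predict_noise(x, t, None)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # Posterior mean of x_{t-1} given x_t and the predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])

        # Add noise at every step except the last
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

sample = ddpm_sample(shape=(8, 8))
print(sample.shape)
```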
287 papers
Papers - Page 11
October 4, 2024
Diffusion State-Guided Projected Gradient for Inverse Problems
Rayhan Zirvi, Bahareh Tolooshams, Anima Anandkumar

CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control
Guy Tevet, Sigal Raab, Setareh Cohan, Daniele Reda, Zhengyi Luo, Xue Bin Peng, Amit H. Bermano, Michiel van de Panne
October 3, 2024
GUD: Generation with Unified Diffusion
Mathis Gerdes, Max Welling, Miranda C. N. Cheng

Diffusion & Adversarial Schrödinger Bridges via Iterative Proportional Markovian Fitting
Sergei Kholkin, Grigoriy Ksenofontov, David Li, Nikita Kornilov, Nikita Gushchin, Alexandra Suvorikova, Alexey Kroshnin, Evgeny Burnaev, +1