Diffusion Explainer
Diffusion models are generative models that create new data samples, primarily images and other high-dimensional data, by learning to reverse a gradual noise-addition process. Current research focuses on improving efficiency (e.g., one-step diffusion), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges like data replication and mode collapse. These advances are impacting diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulation, by providing powerful tools for data generation, manipulation, and analysis.
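The noise-addition process mentioned above can be sketched in a few lines. This is a minimal illustration of the standard DDPM-style closed-form forward step and its algebraic inversion, assuming a linear beta schedule; the "noise estimate" is supplied as an oracle here, whereas a real diffusion model would predict it with a trained network.

```python
import numpy as np

# Minimal sketch of the diffusion forward (noising) process and its
# closed-form inversion, assuming a standard DDPM-style linear beta
# schedule. The noise estimate passed to predict_x0 is an oracle here;
# in an actual model it would come from a trained denoising network.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise schedule beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative product, alpha-bar_t

def forward_noise(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def predict_x0(xt, t, eps_hat):
    """Invert the forward step given a noise estimate eps_hat."""
    return (xt - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)              # toy "data" sample
eps = rng.normal(size=4)             # the noise actually added
xt = forward_noise(x0, T // 2, eps)
x0_rec = predict_x0(xt, T // 2, eps) # oracle noise gives exact recovery
```

With a perfect noise estimate the original sample is recovered exactly; a trained network only approximates this, which is why sampling normally iterates over many small reverse steps (the efficiency bottleneck that one-step methods such as TSD-SR below aim to remove).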
Papers
DiffuPT: Class Imbalance Mitigation for Glaucoma Detection via Diffusion Based Generation and Model Pretraining
Youssof Nawar, Nouran Soliman, Moustafa Wassel, Mohamed ElHabebe, Noha Adly, Marwan Torki, Ahmed Elmassry, Islam Ahmed
MaterialPicker: Multi-Modal Material Generation with Diffusion Transformers
Xiaohe Ma, Valentin Deschaintre, Miloš Hašan, Fujun Luan, Kun Zhou, Hongzhi Wu, Yiwei Hu
SpotLight: Shadow-Guided Object Relighting via Diffusion
Frédéric Fortier-Chouinard, Zitian Zhang, Louis-Etienne Messier, Mathieu Garon, Anand Bhattad, Jean-François Lalonde
Learning the Evolution of Physical Structure of Galaxies via Diffusion Models
Andrew Lizarraga, Eric Hanchen Jiang, Jacob Nowack, Yun Qi Li, Ying Nian Wu, Bernie Boscoe, Tuan Do
HoliSDiP: Image Super-Resolution via Holistic Semantics and Diffusion Prior
Li-Yuan Tsao, Hao-Wei Chen, Hao-Wei Chung, Deqing Sun, Chun-Yi Lee, Kelvin C.K. Chan, Ming-Hsuan Yang
TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution
Linwei Dong, Qingnan Fan, Yihong Guo, Zhonghao Wang, Qi Zhang, Jinwei Chen, Yawei Luo, Changqing Zou
SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation
Duc-Hai Pham, Tung Do, Phong Nguyen, Binh-Son Hua, Khoi Nguyen, Rang Nguyen
DiffSLT: Enhancing Diversity in Sign Language Translation via Diffusion Model
JiHwan Moon, Jihoon Park, Jungeun Kim, Jongseong Bae, Hyeongwoo Jeon, Ha Young Kim
PassionSR: Post-Training Quantization with Adaptive Scale in One-Step Diffusion based Image Super-Resolution
Libo Zhu, Jianze Li, Haotong Qin, Yulun Zhang, Yong Guo, Xiaokang Yang
One Diffusion to Generate Them All
Duong H. Le, Tuan Pham, Sangho Lee, Christopher Clark, Aniruddha Kembhavi, Stephan Mandt, Ranjay Krishna, Jiasen Lu
SMGDiff: Soccer Motion Generation using diffusion probabilistic models
Hongdi Yang, Chengyang Li, Zhenxuan Wu, Gaozheng Li, Jingya Wang, Jingyi Yu, Zhuo Su, Lan Xu
From Diffusion to Resolution: Leveraging 2D Diffusion Models for 3D Super-Resolution Task
Bohao Chen, Yanchao Zhang, Yanan Lv, Hua Han, Xi Chen
Material Anything: Generating Materials for Any 3D Object via Diffusion
Xin Huang, Tengfei Wang, Ziwei Liu, Qing Wang
Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion
Samarth N Ramesh, Zhixue Zhao
FastGrasp: Efficient Grasp Synthesis with Diffusion
Xiaofei Wu, Tao Liu, Caoji Li, Yuexin Ma, Yujiao Shi, Xuming He