Diffusion Explainer
Diffusion models are generative models that create new data samples, primarily images and other high-dimensional data, by learning to reverse a gradual noise-addition process. Current research focuses on improving efficiency (e.g., one-step diffusion), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges such as data replication and mode collapse. These advances provide powerful tools for data generation, manipulation, and analysis across diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulation.
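To make the noise-reversal idea concrete, here is a minimal sketch of DDPM-style forward noising and a single reverse step with classifier-free guidance, written in PyTorch. This is a generic illustration under standard assumptions, not the method of any paper listed below; `eps_model` is a hypothetical placeholder for a trained noise-prediction network so the snippet runs end to end without weights.

```python
import torch

# Minimal sketch: DDPM-style forward noising and one reverse step
# with classifier-free guidance (illustrative, not a specific paper).

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product abar_t

def eps_model(x_t, t, cond=None):
    # Hypothetical placeholder for a trained noise predictor eps_theta;
    # a real model would be a U-Net or transformer conditioned on t/cond.
    return torch.zeros_like(x_t)

def q_sample(x0, t, noise):
    # Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

def p_step(x_t, t, cond=None, guidance_scale=7.5):
    # Classifier-free guidance: blend unconditional and conditional
    # noise estimates, then take one DDPM reverse (denoising) step.
    eps_u = eps_model(x_t, t, cond=None)
    eps_c = eps_model(x_t, t, cond=cond)
    eps_hat = eps_u + guidance_scale * (eps_c - eps_u)
    ab, a, b = alpha_bars[t], alphas[t], betas[t]
    mean = (x_t - b / (1.0 - ab).sqrt() * eps_hat) / a.sqrt()
    if t > 0:                                # add noise except at t = 0
        mean = mean + b.sqrt() * torch.randn_like(x_t)
    return mean

x0 = torch.randn(1, 3, 8, 8)                      # toy "image" batch
x_T = q_sample(x0, T - 1, torch.randn_like(x0))   # fully noised sample
x_prev = p_step(x_T, T - 1)                       # one denoising step
```

The guidance scale of 7.5 is an illustrative default: larger values push samples toward the conditioning signal at the cost of diversity, which is the efficiency/controllability trade-off the papers below explore from different angles.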
Papers
RoomPainter: View-Integrated Diffusion for Consistent Indoor Scene Texturing
Zhipeng Huang, Wangbo Yu, Xinhua Cheng, ChengShu Zhao, Yunyang Ge, Mingyi Guo, Li Yuan, Yonghong Tian
GANFusion: Feed-Forward Text-to-3D with Diffusion in GAN Space
Souhaib Attaiki, Paul Guerrero, Duygu Ceylan, Niloy J. Mitra, Maks Ovsjanikov
Tiled Diffusion
Or Madar, Ohad Fried
Uni-Renderer: Unifying Rendering and Inverse Rendering Via Dual Stream Diffusion
Zhifei Chen, Tianshuo Xu, Wenhang Ge, Leyi Wu, Dongyu Yan, Jing He, Luozhou Wang, Lu Zeng, Shunsi Zhang, Yingcong Chen
Consistent Human Image and Video Generation with Spatially Conditioned Diffusion
Mingdeng Cao, Chong Mou, Ziyang Yuan, Xintao Wang, Zhaoyang Zhang, Ying Shan, Yinqiang Zheng
Non-Normal Diffusion Models
Henry Li
Score-Optimal Diffusion Schedules
Christopher Williams, Andrew Campbell, Arnaud Doucet, Saifuddin Syed
ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer
Jinyi Hu, Shengding Hu, Yuxuan Song, Yufei Huang, Mingxuan Wang, Hao Zhou, Zhiyuan Liu, Wei-Ying Ma, Maosong Sun