Latent Diffusion Model
Latent diffusion models (LDMs) are generative AI models that create high-quality images by reversing a diffusion process in a compressed latent space, which makes them substantially more efficient than pixel-space diffusion methods. Current research focuses on improving controllability (e.g., through text or other conditioning modalities), enhancing efficiency (e.g., via parameter-efficient architectures or faster inference), and addressing challenges such as model robustness and ethical concerns (e.g., watermarking and mitigating adversarial attacks). LDMs are having a significant impact on fields including medical imaging (synthesis and restoration), speech enhancement, and even physics simulation, by enabling the generation of realistic and diverse data for training and analysis where real data is scarce or difficult to obtain.
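The core idea above can be illustrated with a toy sketch of the reverse (sampling) loop. This is a minimal, illustrative DDPM-style sampler operating on a small latent tensor; the `predict_noise` function is a hypothetical placeholder standing in for a trained noise-prediction network (in real LDMs, a U-Net), and the VAE encoder/decoder that maps between pixels and latents is omitted entirely. All names and the schedule values are assumptions for illustration, not any specific model's implementation.

```python
import numpy as np

T = 50  # number of diffusion steps (real models often use ~1000)
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)     # cumulative products, ᾱ_t

def predict_noise(z_t, t):
    """Placeholder for a trained noise-prediction network eps_theta(z_t, t)."""
    return z_t * 0.1  # dummy prediction; a real LDM uses a conditioned U-Net

def sample_latent(shape, rng):
    """Reverse the diffusion process in latent space: z_T ~ N(0, I) -> z_0."""
    z = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = predict_noise(z, t)
        # DDPM posterior mean for the previous step
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise except at the final step
            z = z + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return z  # a real LDM would now decode z with the VAE decoder

rng = np.random.default_rng(0)
z0 = sample_latent((4, 8, 8), rng)  # small latent grid vs. full-resolution pixels
print(z0.shape)
```

The efficiency advantage comes from the loop running over a latent tensor (here 4×8×8) rather than a full-resolution image, so each denoising step is far cheaper than in pixel space.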
Papers
Binary Noise for Binary Tasks: Masked Bernoulli Diffusion for Unsupervised Anomaly Detection
Julia Wolleb, Florentin Bieder, Paul Friedrich, Peter Zhang, Alicia Durrer, Philippe C. Cattin
DreamSampler: Unifying Diffusion Sampling and Score Distillation for Image Manipulation
Jeongsol Kim, Geon Yeong Park, Jong Chul Ye
SCP-Diff: Photo-Realistic Semantic Image Synthesis with Spatial-Categorical Joint Prior
Huan-ang Gao, Mingju Gao, Jiaju Li, Wenyi Li, Rong Zhi, Hao Tang, Hao Zhao
Explore In-Context Segmentation via Latent Diffusion Models
Chaoyang Wang, Xiangtai Li, Henghui Ding, Lu Qi, Jiangning Zhang, Yunhai Tong, Chen Change Loy, Shuicheng Yan
3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors
Fangzhou Hong, Jiaxiang Tang, Ziang Cao, Min Shi, Tong Wu, Zhaoxi Chen, Shuai Yang, Tengfei Wang, Liang Pan, Dahua Lin, Ziwei Liu
OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
Yuhao Xu, Tao Gu, Weifeng Chen, Chengcai Chen