Latent Diffusion Model
Latent diffusion models (LDMs) are generative AI models that create high-quality images by reversing a diffusion process in a compressed latent space, offering efficiency advantages over pixel-space methods. Current research focuses on improving controllability (e.g., through text or other modalities), enhancing efficiency (e.g., via parameter-efficient architectures or faster inference), and addressing challenges like model robustness and ethical concerns (e.g., watermarking and mitigating adversarial attacks). LDMs are significantly impacting various fields, including medical imaging (synthesis and restoration), speech enhancement, and even physics simulation, by enabling the generation of realistic and diverse data for training and analysis where real data is scarce or difficult to obtain.
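The core idea described above — running the diffusion process in a compressed latent space bracketed by an encoder and decoder — can be sketched in a few lines. The snippet below is a toy illustration, not any paper's method: a fixed orthonormal projection stands in for the trained VAE, an oracle noise predictor stands in for the trained denoising network, and the dimensions, step count, and noise schedule are illustrative assumptions. The reverse loop uses a deterministic DDIM-style update, so with the oracle predictor it recovers the original image exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "autoencoder": a fixed orthonormal projection standing in for the
# trained VAE of a real LDM (an illustrative assumption, not a learned model).
IMG_DIM, LATENT_DIM = 64, 8
W, _ = np.linalg.qr(rng.standard_normal((IMG_DIM, LATENT_DIM)))

def encode(x):
    """Compress a pixel-space vector into the latent space."""
    return x @ W

def decode(z):
    """Map a latent vector back to pixel space."""
    return z @ W.T

# Standard DDPM-style linear noise schedule over T steps.
T = 50
betas = np.linspace(1e-4, 0.05, T)
alpha_bars = np.cumprod(1.0 - betas)

def diffuse(z0, t, eps):
    """Closed-form forward process q(z_t | z_0): blend signal and noise."""
    return np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def ddim_step(zt, t, eps_hat):
    """One deterministic (DDIM, eta=0) reverse step in latent space."""
    z0_hat = (zt - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
    if t == 0:
        return z0_hat
    return (np.sqrt(alpha_bars[t - 1]) * z0_hat
            + np.sqrt(1.0 - alpha_bars[t - 1]) * eps_hat)

# Demo: an "image" in the decoder's range, pushed through the full pipeline.
x0 = decode(rng.standard_normal(LATENT_DIM))
z0 = encode(x0)
zt = diffuse(z0, T - 1, rng.standard_normal(LATENT_DIM))

# Reverse loop with an oracle noise predictor; a real LDM would instead use a
# trained U-Net (or transformer) to estimate eps_hat from (zt, t, conditioning).
for t in reversed(range(T)):
    eps_hat = (zt - np.sqrt(alpha_bars[t]) * z0) / np.sqrt(1.0 - alpha_bars[t])
    zt = ddim_step(zt, t, eps_hat)

x_rec = decode(zt)
print(np.allclose(x_rec, x0, atol=1e-6))  # → True
```

The efficiency advantage the summary mentions is visible in the shapes: every denoising step operates on an 8-dimensional latent rather than the 64-dimensional pixel vector, and in a real LDM that gap is far larger (e.g. a 64×64×4 latent versus a 512×512×3 image).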
Papers
Improving Deep Learning-based Automatic Cranial Defect Reconstruction by Heavy Data Augmentation: From Image Registration to Latent Diffusion Models
Marek Wodzinski, Kamil Kwarciak, Mateusz Daniol, Daria Hemmerling
Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks
Victor Boutin, Rishav Mukherji, Aditya Agrawal, Sabine Muzellec, Thomas Fel, Thomas Serre, Rufin VanRullen