Contrastive Diffusion
Contrastive diffusion models combine diffusion probabilistic models with contrastive learning to improve generative performance. Current research focuses on improving sample quality, particularly in out-of-distribution regions, and on applying these models to diverse tasks such as time series forecasting, image and audio generation, and medical image reconstruction. The approach yields more accurate and efficient generation, with impact ranging from healthcare (e.g., improved medical imaging) to entertainment (e.g., realistic animation). Incorporating contrastive learning strengthens the models' ability to capture relationships between inputs and outputs, producing higher-fidelity and more controllable results. A minimal training-loss sketch illustrating the general idea follows.
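The sketch below shows one common way to pair the two objectives: a standard DDPM-style denoising loss plus an InfoNCE-style contrastive term that ties each denoised sample to its own conditioning signal. The module names, noise schedule, dimensions, and loss weighting here are illustrative assumptions, not the formulation of any specific paper listed below.

```python
# Sketch: diffusion denoising loss + InfoNCE contrastive term between condition
# embeddings and the predicted clean latents. All architectures and hyperparameters
# are toy assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on a context embedding (assumed architecture)."""
    def __init__(self, dim=64, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x_t, t, cond):
        # t is normalized to [0, 1] and appended as a scalar feature
        return self.net(torch.cat([x_t, cond, t[:, None]], dim=-1))

def contrastive_diffusion_loss(model, proj, x0, cond, T=1000, tau=0.1, lam=0.5):
    """Denoising loss plus an InfoNCE term pairing samples with their own conditions."""
    B = x0.size(0)
    t = torch.randint(1, T + 1, (B,), device=x0.device)
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / T) ** 2   # toy cosine noise schedule
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt()[:, None] * x0 + (1 - alpha_bar).sqrt()[:, None] * noise

    eps_hat = model(x_t, t.float() / T, cond)
    denoise = F.mse_loss(eps_hat, noise)                          # standard DDPM-style objective

    # Recover an estimate of x0 and contrast it against the batch of conditions:
    # matching (x0_hat, cond) pairs are positives, all other pairings are negatives.
    x0_hat = (x_t - (1 - alpha_bar).sqrt()[:, None] * eps_hat) / alpha_bar.sqrt()[:, None]
    z = F.normalize(proj(x0_hat), dim=-1)                         # project into condition space
    c = F.normalize(cond, dim=-1)
    logits = z @ c.t() / tau                                      # B x B similarity matrix
    contrast = F.cross_entropy(logits, torch.arange(B, device=x0.device))

    return denoise + lam * contrast

# Usage on random data
model, proj = TinyDenoiser(), nn.Linear(64, 32)
x0, cond = torch.randn(16, 64), torch.randn(16, 32)
loss = contrastive_diffusion_loss(model, proj, x0, cond)
loss.backward()
```

The contrastive term pushes generations to stay consistent with their conditioning while the denoising term handles sample fidelity; the weight `lam` and temperature `tau` trade the two off and are placeholder values here.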
Papers
C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation
Boah Kim, Yujin Oh, Bradford J. Wood, Ronald M. Summers, Jong Chul Ye
Contrastive Conditional Latent Diffusion for Audio-visual Segmentation
Yuxin Mao, Jing Zhang, Mochu Xiang, Yunqiu Lv, Yiran Zhong, Yuchao Dai