Image Synthesis
Image synthesis focuses on generating realistic images from various inputs, such as text descriptions, sketches, or other images, with the aim of improving controllability, realism, and efficiency. Current research emphasizes advances in diffusion models, generative adversarial networks (GANs), and autoregressive models, often incorporating techniques such as latent-space manipulation, multimodal conditioning (text and image), and attention mechanisms to enhance image quality and control. The field is significant for its applications in diverse areas, including medical imaging, virtual try-on, and content creation, while also raising important questions about the ethical implications and environmental impact of these computationally intensive models.
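To make the diffusion-model idea mentioned above concrete, here is a minimal toy sketch of the DDPM-style forward (noising) process on a tiny 1-D "image". The linear beta schedule, variable names, and `q_sample` helper are illustrative assumptions for this sketch, not taken from any of the papers listed below; real systems learn a neural network to reverse this process.

```python
import numpy as np

# Toy sketch of the DDPM forward (noising) process.
# Assumption: a simple linear beta schedule, as in many introductory setups.
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative signal-retention factor

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = np.ones(8)                      # a trivial 8-pixel "image"
x_early = q_sample(x0, 10, rng)      # early step: mostly signal
x_late = q_sample(x0, T - 1, rng)    # final step: nearly pure Gaussian noise
```

Generation then runs this process in reverse: starting from Gaussian noise, a trained network iteratively predicts and removes the noise, optionally conditioned on text or image inputs.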
Papers
Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase
Qiuyu Wang, Zifan Shi, Kecheng Zheng, Yinghao Xu, Sida Peng, Yujun Shen
DreamTime: An Improved Optimization Strategy for Diffusion-Guided 3D Generation
Yukun Huang, Jianan Wang, Yukai Shi, Boshi Tang, Xianbiao Qi, Lei Zhang
TauPETGen: Text-Conditional Tau PET Image Synthesis Based on Latent Diffusion Models
Se-In Jang, Cristina Lois, Emma Thibault, J. Alex Becker, Yafei Dong, Marc D. Normandin, Julie C. Price, Keith A. Johnson, Georges El Fakhri, Kuang Gong