Image Synthesis
Image synthesis is the task of generating realistic images from inputs such as text descriptions, sketches, or other images, with ongoing work aimed at improving controllability, realism, and efficiency. Current research emphasizes diffusion models, generative adversarial networks (GANs), and autoregressive models, often incorporating latent space manipulation, multimodal (text and image) conditioning, and attention mechanisms to enhance image quality and control. The field is significant for its applications in diverse areas, including medical imaging, virtual try-on, and content creation, while also raising important considerations about the ethical implications and environmental cost of computationally intensive models.
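To make the diffusion-model family referenced above concrete, the sketch below shows the standard DDPM-style reverse (sampling) process that underlies many of these systems: starting from pure Gaussian noise and iteratively denoising with a learned noise predictor. This is an illustrative minimal example, not the method of any paper listed here; `predict_noise` stands in for a trained U-Net, and the schedule constants are typical defaults, not values from the source.

```python
import torch

# Placeholder for a trained noise-prediction network (e.g., a U-Net).
# In a real system this is learned; here it is a stub so the loop runs.
def predict_noise(x_t, t):
    return torch.zeros_like(x_t)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule (common default)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products used in the posterior

x = torch.randn(1, 3, 64, 64)                # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # DDPM posterior mean: subtract the predicted noise, rescaled by the schedule
    mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    x = mean + torch.sqrt(betas[t]) * noise  # inject fresh noise except at the final step
# x now approximates a sample from the model's learned image distribution
```

Conditioning signals such as text embeddings (the multimodal conditioning mentioned above) would enter through extra arguments to the noise predictor; the acceleration work in the papers below targets exactly the cost of running this loop for many steps.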
Papers
WarpDiffusion: Efficient Diffusion Model for High-Fidelity Virtual Try-on
Xujie Zhang, Xiu Li, Michael Kampffmeyer, Xin Dong, Zhenyu Xie, Feida Zhu, Haoye Dong, Xiaodan Liang
Cache Me if You Can: Accelerating Diffusion Models through Block Caching
Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, Christian Rupprecht, Daniel Cremers, Peter Vajda, Jialiang Wang
Detailed Human-Centric Text Description-Driven Large Scene Synthesis
Gwanghyun Kim, Dong Un Kang, Hoigi Seo, Hayeon Kim, Se Young Chun
Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis
Zipeng Qi, Guoxi Huang, Zebin Huang, Qin Guo, Jinwen Chen, Junyu Han, Jian Wang, Gang Zhang, Lufei Liu, Errui Ding, Jingdong Wang