Image Generation
Image generation research focuses on creating realistic, diverse images from inputs such as text, sketches, or other images, aiming for greater control over the output and lower computational cost. Current efforts center on refining diffusion and autoregressive models, exploring techniques such as dynamic computation, disentangled feature representations, and multimodal conditioning to improve image quality, controllability, and efficiency. These advances have implications for accessible communication, creative content production, and a range of computer vision tasks, providing practical tools for both research and applied work. Ongoing work addresses open challenges such as composing multiple conditions, improving evaluation metrics, and mitigating the biases and failure modes of existing models.
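To ground the diffusion side of this summary, the sketch below shows generic DDPM-style reverse sampling: starting from Gaussian noise, a trained noise-prediction network is applied step by step to denoise toward an image. This is an illustrative sketch, not the method of any paper listed below; `eps_theta`, the linear noise schedule, and the image shape are all assumed placeholders.

```python
import torch

# Minimal DDPM-style reverse sampling sketch (illustrative only).
# `eps_theta` stands in for any trained noise-prediction network,
# e.g. a U-Net taking (noisy image, timestep) and returning predicted noise.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative signal-retention terms

@torch.no_grad()
def sample(eps_theta, shape=(1, 3, 64, 64)):
    """Iteratively denoise pure Gaussian noise into an image."""
    x = torch.randn(shape)                  # x_T ~ N(0, I)
    for t in reversed(range(T)):
        # Fresh noise is added at every step except the last.
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps = eps_theta(x, torch.full((shape[0],), t))  # predicted noise
        # Posterior mean: subtract the predicted noise component,
        # then rescale; sigma_t = sqrt(beta_t) for the added noise.
        x = (1.0 / alphas[t].sqrt()) * (
            x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps
        ) + betas[t].sqrt() * z
    return x
```

In practice, conditional variants of this loop (text embeddings fed into the network, guidance terms mixed into `eps`) are where the controllability work surveyed above happens; the unconditional loop is just the common backbone.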
Papers
OmniGen: Unified Image Generation
Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, Zheng Liu
MM2Latent: Text-to-facial image generation and editing in GANs with multimodal assistance
Debin Meng, Christos Tzelepis, Ioannis Patras, Georgios Tzimiropoulos
RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
Benzhi Wang, Jingkai Zhou, Jingqi Bai, Yang Yang, Weihua Chen, Fan Wang, Zhen Lei
RoomDiffusion: A Specialized Diffusion Model in the Interior Design Industry
Zhaowei Wang, Ying Hao, Hao Wei, Qing Xiao, Lulu Chen, Yulong Li, Yue Yang, Tianyi Li