Image Generation
Image generation research focuses on creating realistic, diverse images from inputs such as text, sketches, or other images, with an emphasis on control and efficiency. Current efforts center on refining diffusion and autoregressive models, exploring techniques such as dynamic computation, disentangled feature representations, and multimodal integration to improve image quality, controllability, and computational cost. These advances matter for accessible communication, creative content production, and many computer vision tasks. Ongoing work tackles open challenges: composing multiple conditions, improving evaluation metrics, and mitigating biases and other limitations of existing models.
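To make the diffusion side of this concrete, the sketch below runs a toy deterministic (DDIM-style) reverse process on a 1-D signal, with the learned noise-prediction network replaced by an oracle that returns the true forward noise. The schedule values, function names, and setup are illustrative assumptions for exposition only, not taken from any paper listed here.

```python
import numpy as np

# Toy sketch of diffusion-based generation (illustrative assumptions throughout).
# A real model would learn to predict the noise; here an oracle stands in for it.

T = 100
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed values)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative signal-retention factors

def forward_noise(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def ddim_step(xt, t, t_prev, eps_pred):
    """One deterministic reverse step given a predicted noise eps_pred."""
    # Estimate the clean signal implied by the current noisy state.
    x0_pred = (xt - np.sqrt(1.0 - alpha_bars[t]) * eps_pred) / np.sqrt(alpha_bars[t])
    if t_prev < 0:
        return x0_pred
    # Re-noise the estimate to the previous timestep's noise level.
    return np.sqrt(alpha_bars[t_prev]) * x0_pred + np.sqrt(1.0 - alpha_bars[t_prev]) * eps_pred

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, np.pi, 8))   # a toy 1-D "image"
eps = rng.standard_normal(x0.shape)
x = forward_noise(x0, T - 1, eps)       # fully noised state

# Reverse diffusion: with an oracle noise predictor, the clean signal is recovered.
for t in reversed(range(T)):
    x = ddim_step(x, t, t - 1, eps)
```

With the oracle predictor, each step recovers `x0` exactly; a trained network only approximates the noise, so real samplers trade step count against fidelity, which is one axis the efficiency-oriented papers below target.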
Papers
KITTEN: A Knowledge-Intensive Evaluation of Image Generation on Visual Entities
Hsin-Ping Huang, Xinyi Wang, Yonatan Bitton, Hagai Taitelbaum, Gaurav Singh Tomar, Ming-Wei Chang, Xuhui Jia, Kelvin C.K. Chan, Hexiang Hu, Yu-Chuan Su, Ming-Hsuan Yang
A Simple Approach to Unifying Diffusion-based Conditional Generation
Xirui Li, Charles Herrmann, Kelvin C.K. Chan, Yinxiao Li, Deqing Sun, Chao Ma, Ming-Hsuan Yang
HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
Haotian Tang, Yecheng Wu, Shang Yang, Enze Xie, Junsong Chen, Junyu Chen, Zhuoyang Zhang, Han Cai, Yao Lu, Song Han
Customize Your Visual Autoregressive Recipe with Set Autoregressive Modeling
Wenze Liu, Le Zhuo, Yi Xin, Sheng Xia, Peng Gao, Xiangyu Yue
Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Jinbin Bai, Tian Ye, Wei Chow, Enxin Song, Qing-Guo Chen, Xiangtai Li, Zhen Dong, Lei Zhu, Shuicheng Yan
DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation
Jiatao Gu, Yuyang Wang, Yizhe Zhang, Qihang Zhang, Dinghuai Zhang, Navdeep Jaitly, Josh Susskind, Shuangfei Zhai
Relational Diffusion Distillation for Efficient Image Generation
Weilun Feng, Chuanguang Yang, Zhulin An, Libo Huang, Boyu Diao, Fei Wang, Yongjun Xu
Uncovering Regional Defaults from Photorealistic Forests in Text-to-Image Generation with DALL-E 2
Zilong Liu, Krzysztof Janowicz, Kitty Currier, Meilin Shi
ControlAR: Controllable Image Generation with Autoregressive Models
Zongming Li, Tianheng Cheng, Shoufa Chen, Peize Sun, Haocheng Shen, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang