Image Generation
Image generation research focuses on creating realistic and diverse images from inputs such as text, sketches, or other images, with the goals of greater control and efficiency. Current efforts center on refining diffusion and autoregressive models, exploring techniques such as dynamic computation, disentangled feature representations, and multimodal integration to improve image quality and controllability while reducing computational cost. These advances have significant implications for accessible communication, creative content production, and a range of computer vision tasks, offering powerful tools for both scientific investigation and practical applications. Ongoing work addresses challenges such as handling multiple conditions simultaneously, improving evaluation metrics, and mitigating biases and other limitations of existing models.
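To make the diffusion-model framing above concrete, the following is a minimal sketch of DDPM-style reverse-diffusion sampling: starting from Gaussian noise, each step uses a noise predictor to move toward a cleaner sample. All names here (`denoise_step`, `sample`, `predict_noise`) are hypothetical; in practice `predict_noise` would be a trained neural network, and the schedule constants are illustrative placeholders.

```python
import numpy as np

def denoise_step(x_t, t, predict_noise, alphas, alpha_bars, rng):
    # One DDPM-style reverse step: estimate the noise present in x_t,
    # subtract its contribution, and (except at t == 0) re-inject a
    # small amount of fresh noise.
    eps_hat = predict_noise(x_t, t)
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        sigma = np.sqrt(1.0 - alphas[t])
        return mean + sigma * rng.standard_normal(x_t.shape)
    return mean

def sample(shape, predict_noise, steps=50, seed=0):
    # Linear beta schedule (illustrative values, not tuned).
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t, predict_noise, alphas, alpha_bars, rng)
    return x
```

With a real model, `predict_noise(x_t, t)` would be conditioned on text or other inputs, which is where the controllability techniques surveyed above come in; here a trivial predictor is enough to exercise the sampling loop.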
Papers
Enhancing Early Diabetic Retinopathy Detection through Synthetic DR1 Image Generation: A StyleGAN3 Approach
Sagarnil Das, Pradeep Walia
Improving Autoregressive Visual Generation with Cluster-Oriented Token Prediction
Teng Hu, Jiangning Zhang, Ran Yi, Jieyu Weng, Yabiao Wang, Xianfang Zeng, Zhucun Xue, Lizhuang Ma
UNIC-Adapter: Unified Image-instruction Adapter with Multi-modal Transformer for Image Generation
Lunhao Duan, Shanshan Zhao, Wenjun Yan, Yinglun Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, Mingming Gong, Gui-Song Xia
Protective Perturbations against Unauthorized Data Usage in Diffusion-based Image Generation
Sen Peng, Jijia Yang, Mingyue Wang, Jianfei He, Xiaohua Jia
DreamOmni: Unified Image Generation and Editing
Bin Xia, Yuechen Zhang, Jingyao Li, Chengyao Wang, Yitong Wang, Xinglong Wu, Bei Yu, Jiaya Jia
Human-Guided Image Generation for Expanding Small-Scale Training Image Datasets
Changjian Chen, Fei Lv, Yalong Guan, Pengcheng Wang, Shengjie Yu, Yifan Zhang, Zhuo Tang