Image Synthesis
Image synthesis focuses on generating realistic images from various inputs, such as text descriptions, sketches, or other images, with the aims of improving controllability, realism, and efficiency. Current research emphasizes advances in diffusion models, generative adversarial networks (GANs), and autoregressive models, often incorporating techniques such as latent space manipulation, multimodal conditioning (text and image), and attention mechanisms to enhance image quality and control. The field is significant for its applications in diverse areas, including medical imaging, virtual try-on, and content creation, while also raising important considerations regarding the ethical implications and environmental impact of computationally intensive models.
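To make the diffusion-model family mentioned above concrete, here is a minimal sketch of the DDPM-style reverse (denoising) loop. It assumes a linear beta schedule and substitutes a placeholder for the trained noise-prediction network; all function names here are illustrative and not taken from any of the papers listed below.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear beta schedule, as commonly used in DDPM-style models.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def predict_noise(x_t, t):
    # Placeholder for a trained network eps_theta(x_t, t); a real model
    # would be a U-Net or transformer. Returning zeros keeps the sketch
    # self-contained and easy to inspect.
    return np.zeros_like(x_t)

def reverse_step(x_t, t, betas, alphas, alpha_bars, rng):
    # One DDPM reverse step: subtract the (scaled) predicted noise to get
    # the posterior mean, then add fresh Gaussian noise for t > 0.
    eps = predict_noise(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
x = rng.standard_normal((8, 8))      # start from pure Gaussian noise
for t in reversed(range(len(betas))):  # iterate t = T-1 ... 0
    x = reverse_step(x, t, betas, alphas, alpha_bars, rng)
```

In practice, the same loop underlies latent-space variants (the denoising runs in an autoencoder's latent space) and conditional variants (text or image embeddings are fed into the noise-prediction network), which is how the multimodal conditioning described above is typically realized.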
Papers
Adaptively-Realistic Image Generation from Stroke and Sketch with Diffusion Model
Shin-I Cheng, Yu-Jie Chen, Wei-Chen Chiu, Hung-Yu Tseng, Hsin-Ying Lee
Deformation equivariant cross-modality image synthesis with paired non-aligned training data
Joel Honkamaa, Umair Khan, Sonja Koivukoski, Mira Valkonen, Leena Latonen, Pekka Ruusuvuori, Pekka Marttinen
Novel Deep Learning Approach to Derive Cytokeratin Expression and Epithelium Segmentation from DAPI
Felix Jakob Segerer, Katharina Nekolla, Lorenz Rognoni, Ansh Kapil, Markus Schick, Helen Angell, Günter Schmidt
SGM-Net: Semantic Guided Matting Net
Qing Song, Wenfeng Sun, Donghan Yang, Mengjie Hu, Chun Liu
NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis
Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, Nan Duan
BigColor: Colorization using a Generative Color Prior for Natural Images
Geonung Kim, Kyoungkook Kang, Seongtae Kim, Hwayoon Lee, Sehoon Kim, Jonghyun Kim, Seung-Hwan Baek, Sunghyun Cho