Image-to-Image Translation
Image-to-image translation transforms images from one domain to another, preserving essential content while altering style or modality. Current research focuses on improving the quality and efficiency of this translation using various architectures, including Generative Adversarial Networks (GANs), diffusion models, and methods leveraging contrastive learning and optimal transport. By enabling the generation of realistic and consistent translated images, these advances are driving progress in diverse applications such as medical image analysis, robotics, and the creation of synthetic datasets for training AI models. Further efforts are underway to enhance the controllability and explainability of these translation processes.
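To make the "preserve content, alter style" objective concrete, here is a minimal numpy sketch of the cycle-consistency loss popularized by CycleGAN for unpaired translation. The two "generators" below are hypothetical toy linear maps standing in for the convolutional networks a real system would train; the point is only the structure of the round-trip penalty, not a working model.

```python
import numpy as np

# Toy stand-ins for learned generators (assumed for illustration):
# G maps domain X -> Y (e.g. photo -> painting), F maps Y -> X.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x):
    """L1 round-trip penalty ||F(G(x)) - x||_1.

    Unpaired methods add this term to the adversarial losses so that
    translated images can be mapped back, forcing content to survive
    the style change even without paired supervision.
    """
    return float(np.mean(np.abs(F(G(x)) - x)))

batch = np.random.rand(4, 8, 8, 3)  # a tiny batch of fake "images"
loss = cycle_consistency_loss(batch)
```

Here F happens to invert G exactly, so the loss is (numerically) zero; during real training the loss starts large and is minimized jointly with the discriminators' adversarial objectives.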
Papers
Neural Style Transfer and Unpaired Image-to-Image Translation to deal with the Domain Shift Problem on Spheroid Segmentation
Manuel García-Domínguez, César Domínguez, Jónathan Heras, Eloy Mata, Vico Pascual
Improving Unsupervised Stain-To-Stain Translation using Self-Supervision and Meta-Learning
Nassim Bouteldja, Barbara Mara Klinkhammer, Tarek Schlaich, Peter Boor, Dorit Merhof
Panoptic-aware Image-to-Image Translation
Liyun Zhang, Photchara Ratsamee, Bowen Wang, Zhaojie Luo, Yuki Uranishi, Manabu Higashida, Haruo Takemura
Semantic Map Injected GAN Training for Image-to-Image Translation
Balaram Singh Kshatriya, Shiv Ram Dubey, Himangshu Sarma, Kunal Chaudhary, Meva Ram Gurjar, Rahul Rai, Sunny Manchanda