CycleGAN Model
CycleGAN is a generative adversarial network (GAN) for unpaired image-to-image translation: it learns mappings between two image domains without requiring corresponding image pairs in the training data. Current research focuses on improving CycleGAN's performance and applicability by modifying its cycle-consistency loss, incorporating guidance from downstream tasks (e.g., segmentation), and developing lightweight architectures that train faster and require less computation. The technique is widely used in medical imaging (e.g., synthesizing CT scans from MRIs, enhancing ultrasound images), remote sensing, and speech processing, where it supports data augmentation, image enhancement, and cross-modal data harmonization. A minimal sketch of the cycle-consistency term is given below.
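The cycle-consistency loss mentioned above penalizes the L1 error after a round trip through both generators (X → Y → X and Y → X → Y). The following is a minimal PyTorch sketch of that term only, not any specific paper's implementation; the generator architecture, the names G, F, and lambda_cyc, and the weight value 10.0 are illustrative assumptions (the adversarial and identity losses used in full CycleGAN training are omitted).

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator; real CycleGAN models use a ResNet or U-Net backbone."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def cycle_consistency_loss(G, F, real_x, real_y, lambda_cyc=10.0):
    """L1 reconstruction error after mapping through both generators:
    x -> G(x) -> F(G(x)) should recover x, and y -> F(y) -> G(F(y)) should recover y."""
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(real_x)), real_x)    # X -> Y -> X
    backward_cycle = l1(G(F(real_y)), real_y)   # Y -> X -> Y
    return lambda_cyc * (forward_cycle + backward_cycle)

if __name__ == "__main__":
    G, F = TinyGenerator(), TinyGenerator()      # G: X -> Y, F: Y -> X
    x = torch.randn(2, 3, 64, 64)                # unpaired batch from domain X
    y = torch.randn(2, 3, 64, 64)                # unpaired batch from domain Y
    print(cycle_consistency_loss(G, F, x, y).item())

In full training this term is added to the adversarial losses of the two discriminators; the weight lambda_cyc controls how strongly the round-trip reconstruction constrains the otherwise under-determined unpaired mapping.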
Papers
Standardized CycleGAN training for unsupervised stain adaptation in invasive carcinoma classification for breast histopathology
Nicolas Nerrienet, Rémy Peyret, Marie Sockeel, Stéphane Sockeel
PaCaNet: A Study on CycleGAN with Transfer Learning for Diversifying Fused Chinese Painting and Calligraphy
Zuhao Yang, Huajun Bai, Zhang Luo, Yang Xu, Wei Pang, Yue Wang, Yisheng Yuan, Yingfang Yuan