Unpaired Image Translation

Unpaired image translation aims to learn mappings between image domains without paired examples, a capability that matters wherever paired data is difficult or impossible to collect. Current research focuses on improving the quality and semantic consistency of translated images with generative models such as Generative Adversarial Networks (GANs) and diffusion models, often combined with techniques like optimal transport and contrastive learning. These advances are being applied across fields: data augmentation for medical image analysis (e.g., generating synthetic surgical images or harmonizing CT scans from different vendors), domain-shift mitigation in scientific data analysis (e.g., LArTPC detector responses), and the construction of training datasets for computer vision tasks.
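
The paragraph above names GANs and semantic-consistency objectives without showing how the unpaired constraint is enforced in practice. Below is a minimal PyTorch sketch of one widely used recipe, a CycleGAN-style cycle-consistency loss combined with a least-squares adversarial loss; the tiny placeholder networks, the LSGAN objective, and the cycle weight of 10 are illustrative assumptions, not details taken from any specific paper listed here.

```python
# Minimal sketch of unpaired image-to-image translation with a
# cycle-consistency constraint (CycleGAN-style), assuming PyTorch.
# The tiny conv networks below are placeholders for real architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Placeholder generator: maps a 3-channel image to a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder PatchGAN-style discriminator: real/fake score per patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_ab, G_ba = TinyGenerator(), TinyGenerator()   # A->B and B->A mappings
D_b = TinyDiscriminator()                       # discriminates domain B
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D_b.parameters(), lr=2e-4)

real_a = torch.rand(4, 3, 64, 64) * 2 - 1  # unpaired batch from domain A
real_b = torch.rand(4, 3, 64, 64) * 2 - 1  # unpaired batch from domain B

# Generator step: fool D_b and reconstruct A after a round trip (cycle loss),
# which is what keeps the translation semantically consistent without pairs.
fake_b = G_ab(real_a)
rec_a = G_ba(fake_b)
pred_fake = D_b(fake_b)
adv_loss = F.mse_loss(pred_fake, torch.ones_like(pred_fake))  # LSGAN objective
cycle_loss = F.l1_loss(rec_a, real_a)
g_loss = adv_loss + 10.0 * cycle_loss  # cycle weight of 10 is a common default
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Discriminator step: real B images vs. translated (detached) fakes.
pred_real = D_b(real_b)
pred_fake = D_b(fake_b.detach())
d_loss = 0.5 * (F.mse_loss(pred_real, torch.ones_like(pred_real))
                + F.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```

The same training-loop skeleton accommodates the other objectives mentioned above: a contrastive (patch-wise) loss or an optimal-transport cost can replace or augment the cycle term, and a diffusion model can take the place of the GAN generator.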

Papers