Paper ID: 2303.12704
AptSim2Real: Approximately-Paired Sim-to-Real Image Translation
Charles Y Zhang, Ashish Shrivastava
Advancements in graphics technology have increased the use of simulated data for training machine learning models. However, simulated data often differs from real-world data, creating a distribution gap that can reduce the efficacy of models trained on simulation data when deployed in real-world applications. To mitigate this gap, sim-to-real domain transfer modifies simulated images to better match real-world data, enabling the effective use of simulation data in model training. Sim-to-real transfer builds on image translation methods, which fall into two main categories: paired and unpaired image-to-image translation. Paired image translation requires pixel-wise correspondence between source and target images, making it difficult to apply in practice since such correspondence rarely exists between simulation and real-world data. Unpaired image translation, while more suitable for sim-to-real transfer, remains challenging to learn for complex natural scenes. To address these challenges, we propose a third category: approximately-paired sim-to-real translation, where the source and target images do not need to be exactly paired. Our approximately-paired method, AptSim2Real, exploits the fact that simulators can generate scenes loosely resembling real-world scenes in terms of lighting, environment, and composition. Our novel training strategy yields significant qualitative and quantitative improvements, with up to a 24% improvement in FID score compared to state-of-the-art unpaired image-translation methods.
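To make the idea of approximate pairing concrete, the sketch below shows what one approximately-paired training step could look like in PyTorch. It is a minimal illustration, not the paper's actual training strategy: the generator G, discriminator D, frozen feature extractor feat, and the loss weight lam are all hypothetical names, and the feature-space pairing loss is one plausible way to supervise with loosely matched images that are not pixel-aligned.

```python
# Hypothetical sketch of an approximately-paired training step (PyTorch).
# G, D, feat, and the lam weighting are illustrative assumptions, not the
# formulation used in the paper.
import torch
import torch.nn.functional as F

def train_step(G, D, feat, sim_img, approx_real_img, opt_g, opt_d, lam=10.0):
    """One generator/discriminator update for approximately-paired translation.

    sim_img and approx_real_img depict loosely similar scenes (lighting,
    environment, composition) but are NOT pixel-aligned, so supervision is
    applied in a feature space rather than pixel space.
    """
    fake = G(sim_img)  # translated simulated image

    # --- Discriminator update: real images vs. translated sim images ---
    opt_d.zero_grad()
    real_logits = D(approx_real_img)
    fake_logits = D(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits,
                                                 torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits,
                                                   torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()

    # --- Generator update: fool D while staying close to the approximate
    # pair in feature space (tolerates pixel misalignment, unlike L1 on pixels)
    opt_g.zero_grad()
    adv_logits = D(fake)
    adv_loss = F.binary_cross_entropy_with_logits(adv_logits,
                                                  torch.ones_like(adv_logits))
    pair_loss = F.l1_loss(feat(fake), feat(approx_real_img))
    g_loss = adv_loss + lam * pair_loss
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

The key design point this sketch illustrates is that the pairing constraint is enforced at the feature level rather than the pixel level, so the real image only needs to resemble the simulated scene approximately for the loss to provide a useful training signal.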
Submitted: Mar 9, 2023