Paper ID: 2202.08537

Domain Adaptation for Underwater Image Enhancement via Content and Style Separation

Yu-Wei Chen, Soo-Chang Pei

Underwater images suffer from color cast, low contrast, and haze caused by light absorption, refraction, and scattering, which degrades high-level applications such as object detection and object tracking. Recent learning-based methods demonstrate astonishing performance on underwater image enhancement; however, most of these works rely on synthetic paired data for supervised learning and ignore the domain gap to real-world data. To address this problem, we propose a domain adaptation framework for underwater image enhancement via content and style separation. Unlike prior domain adaptation works for underwater image enhancement, which aim to minimize the latent discrepancy between synthetic and real-world data, we separate the encoded features into content and style latents, distinguish the style latents of different domains, i.e., the synthetic, real-world underwater, and clean domains, and perform domain adaptation and image enhancement in the latent space. Through latent manipulation, our model provides a user-interactive interface that continuously adjusts the enhancement level. Experiments on various public real-world underwater benchmarks demonstrate that the proposed framework is capable of performing domain adaptation for underwater image enhancement and outperforms various state-of-the-art underwater image enhancement algorithms both quantitatively and qualitatively. The model and source code will be available at https://github.com/fordevoted/UIESS

Submitted: Feb 17, 2022
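
The sketch below illustrates the content/style-separation idea described in the abstract: a content encoder, a style encoder, and a decoder, where enhancement is performed by interpolating the underwater style latent toward a clean-domain style code. All module names, layer choices, and the interpolation scheme are illustrative assumptions, not the authors' released UIESS implementation (see the linked repository for the actual code).

```python
# Minimal PyTorch sketch of content/style separation with adjustable enhancement.
# Hypothetical architecture: not the authors' implementation.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Maps an image to a spatial content latent (domain-invariant structure)."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class StyleEncoder(nn.Module):
    """Maps an image to a global style vector (domain-specific appearance)."""
    def __init__(self, in_ch=3, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs an image from a content latent modulated by a style vector."""
    def __init__(self, ch=64, style_dim=8, out_ch=3):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, ch)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, content, style):
        # Simple feature-wise modulation by the style code; the paper's decoder may differ.
        gain = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content * (1.0 + gain))


def enhance(x_underwater, clean_style, alpha, content_enc, style_enc, decoder):
    """Enhance by blending the underwater style latent toward a clean-domain style.

    alpha in [0, 1] interpolates between the original and clean style, giving the
    continuously adjustable enhancement level mentioned in the abstract.
    """
    content = content_enc(x_underwater)
    style_uw = style_enc(x_underwater)
    style_mix = (1.0 - alpha) * style_uw + alpha * clean_style
    return decoder(content, style_mix)


if __name__ == "__main__":
    ce, se, de = ContentEncoder(), StyleEncoder(), Decoder()
    x = torch.rand(1, 3, 64, 64)        # dummy underwater image
    clean_style = torch.zeros(1, 8)     # placeholder clean-domain style code
    for alpha in (0.0, 0.5, 1.0):       # three enhancement levels
        y = enhance(x, clean_style, alpha, ce, se, de)
        print(alpha, y.shape)           # torch.Size([1, 3, 64, 64])
```

In this toy setup the clean-domain style code is a placeholder; in a trained model it would be produced by the style encoder from clean images (or learned as a domain code), and sweeping alpha yields the continuous change in enhancement strength described above.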