Paper ID: 2203.04812

A high-precision self-supervised monocular visual odometry in foggy weather based on robust cycled generative adversarial networks and multi-task learning aided depth estimation

Xiuyuan Li, Jiangang Yu, Fengchao Li, Guowen An

This paper proposes a high-precision self-supervised monocular visual odometry (VO) method specifically designed for navigation in foggy weather. A cycled generative adversarial network is designed to obtain a high-quality self-supervised loss by forcing the forward and backward half-cycles to output consistent estimates. Moreover, a gradient-based loss and a perceptual loss are introduced to eliminate the interference of complex photometric changes on the self-supervised loss in foggy weather. To address the ill-posed problem of depth estimation, a self-supervised multi-task-learning-aided depth estimation module is designed, exploiting the strong correlation between depth estimation and the transmission map of hazy images in foggy weather. Experimental results on the synthetic foggy KITTI dataset show that the proposed self-supervised monocular VO outperforms other state-of-the-art monocular VO methods in both depth and pose estimation, indicating that the designed method is better suited to foggy weather.
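The depth-transmission correlation the abstract refers to comes from the standard atmospheric scattering (Koschmieder) model, in which transmission decays exponentially with scene depth. The sketch below illustrates that relation and how synthetic foggy frames are typically rendered from clear images and depth maps; the function names, scattering coefficient, and airlight value are illustrative assumptions, not details taken from the paper.

import numpy as np

def transmission_from_depth(depth, beta=0.05):
    # Koschmieder model: t(x) = exp(-beta * d(x)).
    # depth: per-pixel scene depth in meters; beta: atmospheric scattering
    # coefficient (illustrative value, not from the paper).
    return np.exp(-beta * depth)

def synthesize_haze(clear_img, depth, airlight=0.9, beta=0.05):
    # Hazy image formation: I(x) = J(x) * t(x) + A * (1 - t(x)),
    # mirroring how synthetic foggy KITTI-style frames are commonly generated.
    t = transmission_from_depth(depth, beta)[..., None]  # broadcast over RGB channels
    return clear_img * t + airlight * (1.0 - t)

Because t(x) is a monotonic function of d(x), a network trained to predict the transmission map of a hazy frame shares most of its structure with a depth network, which is why the two tasks can aid each other in a multi-task setting.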

Submitted: Mar 9, 2022