Paper ID: 2204.01027
Distortion-Aware Self-Supervised 360° Depth Estimation from A Single Equirectangular Projection Image
Yuya Hasegawa, Satoshi Ikehata, Kiyoharu Aizawa
360° images have become widely available over the last few years. This paper proposes a new technique for depth prediction from a single 360° image in open environments. Depth prediction from a single 360° image is difficult for two reasons. One is the limitation of supervision datasets: the currently available datasets are limited to indoor scenes. The other is the problems caused by the equirectangular projection (ERP) format commonly used for 360° images, namely its spherical coordinate system and its distortion. To date, only one existing method deals with these problems; it uses cube-map projection to produce six perspective images and applies self-supervised learning on video for perspective depth prediction. Unlike that method, we use the ERP format directly. We propose a framework that operates directly on ERP images, handling the ERP-related problems through coordinate conversion of correspondences and a distortion-aware upsampling module, and we extend a self-supervised learning method to open environments. For the experiments, we first built a dataset for evaluation and quantitatively evaluated depth prediction in outdoor scenes. We show that our method outperforms the state-of-the-art technique.
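The "coordinate conversion" the abstract refers to follows from the standard ERP parameterization, in which image columns map to longitude and rows map to latitude. The Python sketch below illustrates this standard mapping from ERP pixel coordinates to unit ray directions on the sphere; function and variable names are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def erp_pixel_to_ray(u, v, width, height):
    """Map ERP pixel coordinates to unit ray directions on the sphere.

    Standard equirectangular parameterization (a minimal sketch, not the
    paper's code): longitude spans [-pi, pi] across the image width and
    latitude spans [pi/2, -pi/2] down the image height.
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi   # horizontal angle
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi  # vertical angle
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# Example: a ray for every pixel of a 1024x512 ERP image.
u, v = np.meshgrid(np.arange(1024), np.arange(512))
rays = erp_pixel_to_ray(u, v, 1024, 512)  # shape (512, 1024, 3)
```

Because pixels near the poles correspond to much smaller solid angles than pixels near the equator, any per-pixel operation on ERP images (such as upsampling) inherits this latitude-dependent distortion, which is the motivation for the distortion-aware design described above.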
Submitted: Apr 3, 2022