Monocular Depth Estimation
Monocular depth estimation aims to recover three-dimensional scene depth from a single image, an ill-posed inverse problem because depth information is inherently lost during image formation. Current research focuses on improving accuracy and robustness, particularly in challenging settings such as low-texture regions, viewpoint shifts, and non-Lambertian surfaces, often using deep learning models such as transformers and diffusion models alongside techniques like multi-view rendering and radar fusion. These advances benefit applications including autonomous driving, robotics, and augmented reality by enabling more accurate and reliable 3D scene understanding from readily available monocular imagery.
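To make the task concrete, many self-supervised monocular depth pipelines have the network predict per-pixel disparity and convert it to depth via the standard stereo relation depth = focal_length × baseline / disparity. The sketch below illustrates only that conversion step; the focal length, baseline, and toy disparity values are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (in pixels) to a depth map (in meters).

    Clamps disparity away from zero to avoid division by zero, a common
    safeguard in depth-estimation code.
    """
    return focal_length_px * baseline_m / np.maximum(disparity, eps)

# Toy 2x2 disparity map in pixels (hypothetical values).
disp = np.array([[10.0, 20.0],
                 [40.0, 80.0]])

# Illustrative camera parameters: 720 px focal length, 0.54 m baseline.
depth = disparity_to_depth(disp, focal_length_px=720.0, baseline_m=0.54)
# Larger disparity corresponds to a closer surface (smaller depth).
```

In practice the disparity map comes from a learned network rather than a fixed array, and self-supervised training optimizes a photometric reprojection loss rather than supervising depth directly.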
Papers
Self-supervised Monocular Depth Estimation with Large Kernel Attention
Xuezhi Xiang, Yao Wang, Lei Zhang, Denis Ombati, Himaloy Himu, Xiantong Zhen
A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts
Aurel Pjetri (1 and 2), Stefano Caprasecca (1), Leonardo Taccari (1), Matteo Simoncini (1), Henrique Piñeiro Monteagudo (1 and 3), Walter Wallace (1), Douglas Coimbra de Andrade (4), Francesco Sambo (1), Andrew David Bagdanov (1) ((1) Verizon Connect Research, Florence, Italy, (2) Department of Information Engineering, University of Florence, Florence, Italy, (3) University of Bologna, Bologna, Italy, (4) SENAI Institute of Innovation, Rio de Janeiro, Brazil)
Optical Lens Attack on Deep Learning Based Monocular Depth Estimation
Ce Zhou (1), Qiben Yan (1), Daniel Kent (1), Guangjing Wang (1), Ziqi Zhang (2), Hayder Radha (1) ((1) Michigan State University, (2) Peking University)
Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation
Richard D. Paul, Alessio Quercia, Vincent Fortuin, Katharina Nöh, Hanno Scharr