Depth Estimation
Depth estimation is the task of determining the distance of scene points from a camera, with the goal of reconstructing 3D structure from visual data; it is crucial for applications such as autonomous driving and robotics. Current research emphasizes improving accuracy and robustness in challenging scenarios, such as endoscopy and low-light conditions, often employing self-supervised learning and novel neural architectures like transformers and diffusion models alongside traditional stereo vision methods. These advances enable more accurate and reliable perception of the environment, driving progress in medical imaging, autonomous navigation, and 3D scene reconstruction.
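As a concrete illustration of the traditional stereo vision approach mentioned above, the following sketch converts a disparity map to metric depth using the standard pinhole stereo relation depth = f · B / disparity. The camera parameters and disparity values are toy numbers chosen for illustration, not taken from any of the papers below.

```python
import numpy as np

# Hypothetical camera parameters (illustrative values, not from any paper).
focal_length_px = 700.0   # focal length, in pixels
baseline_m = 0.12         # distance between the two cameras, in meters

# A toy disparity map (in pixels); in practice this comes from stereo matching.
disparity = np.array([[35.0, 70.0],
                      [14.0,  7.0]])

# Classic pinhole stereo relation: depth = f * B / disparity.
# Zero disparity corresponds to points at infinity (or unmatched pixels),
# so guard against division by zero.
depth_m = np.where(disparity > 0,
                   focal_length_px * baseline_m / disparity,
                   np.inf)

print(depth_m)  # larger disparity -> closer object
```

Monocular methods, by contrast, must infer this depth from a single image without a baseline, which is why learned priors (self-supervision, transformers, diffusion models) play such a central role in the papers listed below.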
Papers
Optical Lens Attack on Deep Learning Based Monocular Depth Estimation
Ce Zhou (1), Qiben Yan (1), Daniel Kent (1), Guangjing Wang (1), Ziqi Zhang (2), Hayder Radha (1) ((1) Michigan State University, (2) Peking University)
3DDX: Bone Surface Reconstruction from a Single Standard-Geometry Radiograph via Dual-Face Depth Estimation
Yi Gu, Yoshito Otake, Keisuke Uemura, Masaki Takao, Mazen Soufi, Seiji Okada, Nobuhiko Sugano, Hugues Talbot, Yoshinobu Sato
FisheyeDepth: A Real Scale Self-Supervised Depth Estimation Model for Fisheye Camera
Guoyang Zhao, Yuxuan Liu, Weiqing Qi, Fulong Ma, Ming Liu, Jun Ma
Generalizing monocular colonoscopy image depth estimation by uncertainty-based global and local fusion network
Sijia Du, Chengfeng Zhou, Suncheng Xiang, Jianwei Xu, Dahong Qian
Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor
Andrea Conti, Matteo Poggi, Valerio Cambareri, Stefano Mattoccia
LED: Light Enhanced Depth Estimation at Night
Simon de Moreau, Yasser Almehio, Andrei Bursuc, Hafid El-Idrissi, Bogdan Stanciulescu, Fabien Moutarde
Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy
Bojian Li, Bo Liu, Jinghua Yue, Fugen Zhou