Depth Estimation
Depth estimation is the task of determining the distance of scene points from a camera in order to reconstruct 3D scenes from visual data, a capability that is crucial for applications such as autonomous driving and robotics. Current research emphasizes improving accuracy and robustness in challenging scenarios such as endoscopy and low-light conditions, often combining self-supervised learning with neural network architectures such as transformers and diffusion models, alongside traditional stereo vision methods. These advances enable more accurate and reliable perception of the environment, driving progress in medical imaging, autonomous navigation, and 3D scene reconstruction.
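To make the traditional stereo baseline mentioned above concrete, the following is a minimal sketch of stereo depth estimation with OpenCV, assuming a rectified image pair; the file paths, focal length, and baseline values are placeholders and not taken from any of the papers listed below.

```python
import numpy as np
import cv2

# Hypothetical calibration values, used only for illustration.
FOCAL_PX = 700.0   # focal length in pixels
BASELINE_M = 0.12  # distance between the two camera centers in meters

# Load a rectified stereo pair in grayscale (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching estimates per-pixel disparity
# (returned in fixed-point 1/16-pixel units).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Triangulation: depth = f * B / disparity, valid only where disparity > 0.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```

Learning-based methods (self-supervised, transformer, or diffusion models) typically replace the block-matching step with a network that predicts disparity or depth directly, but the same triangulation geometry underlies the stereo setting.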
Papers
$DPF^*$: improved Depth Potential Function for scale-invariant sulcal depth estimation
Maxime Dieudonné (1), Guillaume Auzias (1), Julien Lefèvre (1) ((1) Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, 13005, Marseille, France)
A Systematic Literature Review on Deep Learning-based Depth Estimation in Computer Vision
Ali Rohan, Md Junayed Hasan, Andrei Petrovski
LAA-Net: A Physical-prior-knowledge Based Network for Robust Nighttime Depth Estimation
Kebin Peng, Haotang Li, Zhenyu Qi, Huashan Chen, Zi Wang, Wei Zhang, Sen He
MT3DNet: Multi-Task learning Network for 3D Surgical Scene Reconstruction
Mithun Parab, Pranay Lendave, Jiyoung Kim, Thi Quynh Dan Nguyen, Palash Ingle