Depth Estimation
Depth estimation is the task of determining the distance of scene points from a camera, with the goal of reconstructing 3D structure from visual data; it is crucial for applications such as autonomous driving and robotics. Current research emphasizes improving accuracy and robustness in challenging settings such as endoscopy and low-light conditions, often through self-supervised learning and novel neural architectures, including transformers and diffusion models, alongside traditional stereo vision methods. These advances enable more accurate and reliable perception of the environment, driving progress in medical imaging, autonomous navigation, and 3D scene reconstruction.
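As a concrete illustration of the traditional stereo baseline mentioned above, the sketch below computes a disparity map with OpenCV block matching and converts it to depth via Z = f·B/d. It is a minimal example, not code from any of the listed papers; the image paths, focal length, and baseline are placeholder assumptions.

```python
# Minimal stereo-depth sketch (assumes a rectified image pair; the file paths,
# focal length, and baseline below are illustrative placeholders).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # hypothetical rectified right image

# Block-matching stereo: numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# StereoBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

focal_px = 700.0   # assumed focal length in pixels
baseline_m = 0.12  # assumed stereo baseline in meters

# Depth from disparity: Z = f * B / d, valid only where disparity > 0.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```

Learning-based approaches (self-supervised monocular networks, transformer or diffusion backbones) replace the hand-crafted matching step but typically keep the same disparity-to-depth geometry.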
Papers
Diver Interest via Pointing in Three Dimensions: 3D Pointing Reconstruction for Diver-AUV Communication
Chelsey Edge, Demetrious Kutzke, Megdalia Bromhal, Junaed Sattar
FocDepthFormer: Transformer with latent LSTM for Depth Estimation from Focal Stack
Xueyang Kang, Fengze Han, Abdur R. Fayjie, Patrick Vandewalle, Kourosh Khoshelham, Dong Gong