Depth Estimation
Depth estimation, the task of inferring the distance of scene points from a camera, seeks to reconstruct 3D structure from visual data and is crucial for applications such as autonomous driving and robotics. Current research emphasizes accuracy and robustness in challenging settings such as endoscopy and low-light conditions, often combining self-supervised learning with neural architectures like transformers and diffusion models alongside traditional stereo vision. These advances enable more accurate and reliable perception of the environment, driving progress in medical imaging, autonomous navigation, and 3D scene reconstruction.
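To make the self-supervised idea concrete, here is a minimal PyTorch sketch of the photometric reconstruction loss that underpins many unsupervised monocular depth methods: predicted depth and a relative camera pose are used to warp a neighboring frame into the target view, and the reconstruction error supervises the depth network. The helper names (backproject, project, photometric_loss) and tensor shapes are illustrative assumptions for this sketch, not any listed paper's actual implementation.

import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    # Lift every pixel to a 3D point using the predicted depth.
    # depth: (B, 1, H, W), K_inv: (B, 3, 3) inverse camera intrinsics.
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    rays = K_inv @ pix.expand(B, -1, -1)      # per-pixel viewing rays
    return rays * depth.reshape(B, 1, -1)     # (B, 3, H*W) 3D points

def project(points, K, T):
    # Transform 3D points into the source view and project to pixels.
    # points: (B, 3, N), K: (B, 3, 3), T: (B, 4, 4) relative pose.
    B, _, N = points.shape
    ones = torch.ones(B, 1, N, dtype=points.dtype, device=points.device)
    cam = (T @ torch.cat([points, ones], dim=1))[:, :3]
    pix = K @ cam
    return pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)   # (B, 2, N)

def photometric_loss(target, source, depth, K, K_inv, T):
    # Warp the source frame into the target view with the predicted
    # depth and pose, then penalize the L1 reconstruction error.
    B, _, H, W = target.shape
    pix = project(backproject(depth, K_inv), K, T).reshape(B, 2, H, W)
    grid = torch.stack(
        [pix[:, 0] / (W - 1) * 2 - 1,   # x normalized to [-1, 1]
         pix[:, 1] / (H - 1) * 2 - 1],  # y normalized to [-1, 1]
        dim=-1,
    )
    warped = F.grid_sample(source, grid, padding_mode="border",
                           align_corners=True)
    return (warped - target).abs().mean()

# Toy usage with random tensors (shapes only; real training would use
# consecutive video frames plus a depth network and a pose network).
B, H, W = 2, 96, 320
K = torch.eye(3).expand(B, 3, 3)
loss = photometric_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                        torch.rand(B, 1, H, W) + 0.1, K, torch.linalg.inv(K),
                        torch.eye(4).expand(B, 4, 4))

In practice, methods of this family add refinements this sketch omits, such as an SSIM term in the photometric error, multi-scale losses, and edge-aware depth smoothness regularization.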
Papers
DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features
Letian Wang, Seung Wook Kim, Jiawei Yang, Cunjun Yu, Boris Ivanovic, Steven L. Waslander, Yue Wang, Sanja Fidler, Marco Pavone, Peter Karkus
MEDeA: Multi-view Efficient Depth Adjustment
Mikhail Artemyev, Anna Vorontsova, Anna Sokolova, Alexander Limonov
All-day Depth Completion
Vadim Ezhov, Hyoungseob Park, Zhaoyang Zhang, Rishi Upadhyay, Howard Zhang, Chethan Chinder Chandrappa, Achuta Kadambi, Yunhao Ba, Julie Dorsey, Alex Wong
SDL-MVS: View Space and Depth Deformable Learning Paradigm for Multi-View Stereo Reconstruction in Remote Sensing
Yong-Qiang Mao, Hanbo Bi, Liangyu Xu, Kaiqiang Chen, Zhirui Wang, Xian Sun, Kun Fu
DCPI-Depth: Explicitly Infusing Dense Correspondence Prior to Unsupervised Monocular Depth Estimation
Mengtan Zhang, Yi Feng, Qijun Chen, Rui Fan
Estimating Depth of Monocular Panoramic Image with Teacher-Student Model Fusing Equirectangular and Spherical Representations
Jingguo Liu, Yijun Xu, Shigang Li, Jianfeng Li