Depth Estimation
Depth estimation, the task of determining the distance of scene points from a camera, aims to reconstruct 3D structure from visual data and is crucial for applications such as autonomous driving and robotics. Current research emphasizes improving accuracy and robustness, particularly in challenging scenarios such as endoscopy and low-light conditions, often employing self-supervised learning and novel neural network architectures, including transformers and diffusion models, alongside traditional stereo vision methods. These advancements enable more accurate and reliable perception of the environment, driving progress in medical imaging, autonomous navigation, and 3D scene reconstruction.
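As a concrete illustration of the traditional stereo route mentioned above, the sketch below recovers depth from a rectified stereo pair using OpenCV block matching and the standard triangulation relation Z = f·B/d. The image paths, focal length, and baseline are hypothetical placeholders, not values from any of the listed papers; it is a minimal sketch assuming calibrated, rectified cameras.

```python
# Minimal sketch: classical stereo depth via block matching (OpenCV).
# Image paths and camera parameters are illustrative placeholders.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0   # focal length in pixels (hypothetical calibration)
BASELINE_M = 0.12         # camera baseline in meters (hypothetical)

# Load a rectified stereo pair as grayscale images.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)

# StereoBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Triangulation for rectified cameras: Z = f * B / d.
# Mask out non-positive disparities to avoid division by zero.
depth = np.full_like(disparity, np.nan)
valid = disparity > 0
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

print("Median depth of valid pixels (m):", np.nanmedian(depth))
```

Learning-based methods in the papers below (self-supervised, transformer, or diffusion approaches) typically replace the hand-crafted matching step with a trained network, but the same disparity-to-depth geometry underlies metric evaluation on stereo benchmarks.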
Papers
The RoboDepth Challenge: Methods and Advancements Towards Robust Depth Estimation
Lingdong Kong, Yaru Niu, Shaoyuan Xie, Hanjiang Hu, Lai Xing Ng, Benoit R. Cottereau, Liangjun Zhang, Hesheng Wang, Wei Tsang Ooi, Ruijie Zhu, Ziyang Song, Li Liu, Tianzhu Zhang, Jun Yu, Mohan Jing, Pengwei Li, Xiaohua Qi, Cheng Jin, Yingfeng Chen, Jie Hou, Jie Zhang, Zhen Kan, Qiang Ling, Liang Peng, Minglei Li, Di Xu, Changpeng Yang, Yuanqi Yao, Gang Wu, Jian Kuai, Xianming Liu, Junjun Jiang, Jiamian Huang, Baojun Li, Jiale Chen, Shuang Zhang, Sun Ao, Zhenyu Li, Runze Chen, Haiyong Luo, Fang Zhao, Jingze Yu
Learning Depth Estimation for Transparent and Mirror Surfaces
Alex Costanzino, Pierluigi Zama Ramirez, Matteo Poggi, Fabio Tosi, Stefano Mattoccia, Luigi Di Stefano