Depth Map
A depth map stores, for each pixel in an image, the distance from the camera to the corresponding scene point, and is a core representation for reconstructing 3D scene geometry from 2D images or other sensor data in computer vision and robotics. Current research focuses on improving the accuracy and efficiency of depth map generation, particularly through advances in monocular depth estimation (using a single camera) and multi-view approaches (combining information from multiple cameras), often employing deep learning models such as transformers and diffusion models. These improvements enable more robust and realistic 3D scene understanding in applications such as autonomous driving, augmented and virtual reality, and medical imaging.
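To make the link between a depth map and 3D geometry concrete, here is a minimal sketch (not from any of the papers below) that back-projects a depth map into camera-frame 3D points using a standard pinhole camera model; the focal lengths and principal point used in the example are illustrative values, not parameters taken from the source.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) into camera-frame 3D points.

    fx, fy: focal lengths in pixels; cx, cy: principal point (pinhole model).
    Returns an (H, W, 3) array of (x, y, z) coordinates.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Example: a flat wall 2 m in front of the camera (illustrative intrinsics).
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Every pixel maps to a 3D point along its viewing ray, scaled by its depth value; this back-projection is the basic step that turns estimated depth into the reconstructed scene geometry discussed above.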
Papers
LiRCDepth: Lightweight Radar-Camera Depth Estimation via Knowledge Distillation and Uncertainty Guidance
Huawei Sun, Nastassia Vysotskaya, Tobias Sukianto, Hao Feng, Julius Ott, Xiangyuan Peng, Lorenzo Servadei, Robert Wille
EGSRAL: An Enhanced 3D Gaussian Splatting based Renderer with Automated Labeling for Large-Scale Driving Scene
Yixiong Huo, Guangfeng Jiang, Hongyang Wei, Ji Liu, Song Zhang, Han Liu, Xingliang Huang, Mingjie Lu, Jinzhang Peng, Dong Li, Lu Tian, Emad Barsoum