Large Depth
Large depth, in the context of computer vision and related fields, refers to the accurate estimation and use of depth information in images and videos, with the goal of improving 3D scene understanding and reconstruction. Current research focuses on robust and efficient depth estimation from various sensor modalities (RGB, RGB-D, LiDAR), on architectures such as transformers and convolutional neural networks, and on techniques such as depth fusion, completion, and inpainting. These advances matter for applications ranging from autonomous driving and robotics to augmented reality and 3D modeling, where they enable more accurate and reliable perception of, and interaction with, the 3D world.
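To make the idea of depth fusion and completion mentioned above concrete, the sketch below shows one common, minimal building block: aligning a dense relative depth prediction (e.g., from a monocular network) to sparse metric measurements (e.g., from LiDAR) with a least-squares scale and shift. This is a generic illustration, not the method of any paper listed below; the function name `align_relative_to_metric` and the synthetic data are assumptions made purely for demonstration.

```python
import numpy as np

def align_relative_to_metric(rel_depth, sparse_metric, mask):
    """Fit scale s and shift t so that s * rel_depth + t matches the sparse
    metric depth at the valid (masked) pixels in the least-squares sense.
    rel_depth: dense relative depth map (H, W)
    sparse_metric: metric depth map, valid only where mask is True (H, W)
    mask: boolean array marking sparse metric samples (H, W)
    """
    x = rel_depth[mask]                      # dense prediction at sample locations
    y = sparse_metric[mask]                  # metric measurements
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * rel_depth + t                 # fused, metrically-scaled depth map

# Toy usage: a synthetic "relative" map that is the true metric depth under an
# unknown scale/shift, plus ~2% sparse metric samples (hypothetical data).
rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 10.0, size=(48, 64))
rel_depth = 0.25 * true_depth - 0.1
mask = rng.random((48, 64)) < 0.02
fused = align_relative_to_metric(rel_depth, true_depth, mask)
print("mean abs error after alignment:", np.abs(fused - true_depth).mean())
```

Real depth-completion and fusion systems go far beyond this global affine fit (e.g., learned, spatially varying corrections), but the same principle of anchoring a dense prediction to sparse trusted measurements recurs throughout the literature.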
Papers
Automated Road Extraction from Satellite Imagery Integrating Dense Depthwise Dilated Separable Spatial Pyramid Pooling with DeepLabV3+
Arpan Mahara, Md Rezaul Karim Khan, Naphtali D. Rishe, Wenjia Wang, Seyed Masoud Sadjadi
Simultaneously Solving FBSDEs with Neural Operators of Logarithmic Depth, Constant Width, and Sub-Linear Rank
Takashi Furuya, Anastasis Kratsios
MoDification: Mixture of Depths Made Easy
Chen Zhang, Meizhi Zhong, Qimeng Wang, Xuantao Lu, Zheyu Ye, Chengqiang Lu, Yan Gao, Yao Hu, Kehai Chen, Min Zhang, Dawei Song
DepthSplat: Connecting Gaussian Splatting and Depth
Haofei Xu, Songyou Peng, Fangjinhua Wang, Hermann Blum, Daniel Barath, Andreas Geiger, Marc Pollefeys
Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers
Shwai He, Tao Ge, Guoheng Sun, Bowei Tian, Xiaoyang Wang, Ang Li, Dong Yu
Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor
Andrea Conti, Matteo Poggi, Valerio Cambareri, Stefano Mattoccia
Depth Matters: Exploring Deep Interactions of RGB-D for Semantic Segmentation in Traffic Scenes
Siyu Chen, Ting Han, Changshe Zhang, Weiquan Liu, Jinhe Su, Zongyue Wang, Guorong Cai