Dense 3D
Dense 3D reconstruction aims to build detailed three-dimensional models of scenes from input data such as images, LiDAR scans, and event camera streams. Current research focuses on improving the efficiency and accuracy of these reconstructions, employing techniques such as neural implicit representations (e.g., Neural Radiance Fields, or NeRFs), 3D Gaussian splatting, and voxel-based methods, often integrated with simultaneous localization and mapping (SLAM) to handle dynamic environments. These advances are crucial for applications ranging from autonomous driving and robotics to augmented reality and cultural heritage preservation, enabling more accurate and robust scene understanding in real time.
Papers
EVI-SAM: Robust, Real-time, Tightly-coupled Event-Visual-Inertial State Estimation and 3D Dense Mapping
Weipeng Guan, Peiyu Chen, Huibin Zhao, Yu Wang, Peng Lu
Regulating Intermediate 3D Features for Vision-Centric Autonomous Driving
Junkai Xu, Liang Peng, Haoran Cheng, Linxuan Xia, Qi Zhou, Dan Deng, Wei Qian, Wenxiao Wang, Deng Cai