Visual Odometry
Visual odometry (VO) estimates a camera's motion by analyzing a sequence of images, incrementally reconstructing the camera's trajectory. Current research focuses on improving VO accuracy and robustness, particularly in challenging conditions such as low light, rain, or feature-sparse environments, through techniques including deep learning (e.g., convolutional neural networks, transformers, and reinforcement learning), sensor fusion (integrating LiDAR, IMU, and GPS data), and novel feature extraction methods (e.g., event cameras or coded optics). Advances in VO are crucial for applications such as autonomous navigation in robotics, augmented reality, and 3D mapping, particularly where GPS is unavailable or unreliable.
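The trajectory-reconstruction step described above can be illustrated with a minimal sketch: once a frontend has estimated the relative rotation and translation between consecutive frames (e.g., by decomposing an essential matrix from matched features), the global trajectory is recovered by chaining those relative motions. The function name `accumulate_trajectory` and the NumPy-based formulation are illustrative assumptions, not a reference to any specific VO system from the papers below.

```python
import numpy as np

def accumulate_trajectory(relative_motions, scale=1.0):
    """Chain per-frame relative motions (R, t) into global camera positions.

    Each (R, t) pair is the rotation matrix and unit translation direction
    estimated between consecutive frames; monocular VO recovers translation
    only up to an unknown scale, hence the `scale` parameter.
    (Illustrative sketch, not a specific system's API.)
    """
    R_world = np.eye(3)        # accumulated world-from-camera rotation
    p_world = np.zeros(3)      # camera position in the world frame
    positions = [p_world.copy()]
    for R, t in relative_motions:
        # Move along the current heading, then compose the rotation.
        p_world = p_world + scale * (R_world @ t)
        R_world = R_world @ R
        positions.append(p_world.copy())
    return np.array(positions)

# Example: four forward steps, each followed by a 90-degree yaw turn,
# traverse a unit square and return the camera to the origin.
yaw_90 = np.array([[0., 0., 1.],
                   [0., 1., 0.],
                   [-1., 0., 0.]])
trajectory = accumulate_trajectory([(yaw_90, np.array([0., 0., 1.]))] * 4)
```

Drift in real systems arises because each (R, t) estimate is noisy and errors compound through this chain, which is one motivation for the fusion and learning-based refinements surveyed above.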
Papers
Conformalized Multimodal Uncertainty Regression and Reasoning
Domenico Parente, Nastaran Darabi, Alex C. Stutts, Theja Tulabandhula, Amit Ranjan Trivedi
OCC-VO: Dense Mapping via 3D Occupancy-Based Visual Odometry for Autonomous Driving
Heng Li, Yifan Duan, Xinran Zhang, Haiyi Liu, Jianmin Ji, Yanyong Zhang