Visual Odometry
Visual odometry (VO) is the process of estimating a camera's motion from a sequence of images, with the goal of reconstructing the camera's trajectory. Current research focuses on improving VO accuracy and robustness, particularly in challenging conditions such as low light, rain, or feature-sparse environments, through techniques such as deep learning (e.g., convolutional neural networks, transformers, and reinforcement learning), sensor fusion (integrating LiDAR, IMU, and GPS data), and novel feature extraction methods (e.g., event cameras or coded optics). Advances in VO are crucial for applications including autonomous navigation in robotics, augmented reality, and 3D mapping, particularly where GPS is unavailable or unreliable.
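To make the frame-to-frame estimation step concrete, below is a minimal sketch of classical feature-based monocular VO using OpenCV: ORB features are matched between consecutive frames, an essential matrix is estimated with RANSAC, and the relative rotation and (unit-scale) translation are recovered. The function name, feature counts, and thresholds are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of one frame-to-frame monocular VO step (assumes a calibrated
# pinhole camera with intrinsic matrix K and two consecutive grayscale frames).
# Note: monocular VO recovers translation only up to an unknown scale.
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match binary descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix, then decompose it into R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Chaining relative poses yields the trajectory (up to scale), e.g.:
#   pose_R, pose_t = np.eye(3), np.zeros((3, 1))
#   R, t = relative_pose(frame_k, frame_k1, K)
#   pose_t = pose_t + pose_R @ t
#   pose_R = R @ pose_R
```

In practice, production VO pipelines add keyframe selection, local bundle adjustment or filtering, and often the sensor fusion and learned components surveyed above; this sketch covers only the geometric core.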
Papers
Multi-Sensor Fusion for Quadruped Robot State Estimation using Invariant Filtering and Smoothing
Ylenia Nisticò, Hajun Kim, João Carlos Virgolino Soares, Geoff Fink, Hae-Won Park, Claudio Semini
Istituto Italiano di Tecnologia (IIT) ● Korea Advanced Institute of Science and Technology (KAIST) ● Thompson Rivers University
LPVIMO-SAM: Tightly-coupled LiDAR/Polarization Vision/Inertial/Magnetometer/Optical Flow Odometry via Smoothing and Mapping
Derui Shan, Peng Guo, Wenshuo Li, Du Tao
North China University of Technology ● Beihang University