Visual Odometry
Visual odometry (VO) estimates a camera's motion, and hence its trajectory, from a sequence of images. Current research focuses on improving VO accuracy and robustness, particularly in challenging conditions such as low light, rain, or feature-sparse environments, through deep learning (e.g., convolutional neural networks, transformers, and reinforcement learning), sensor fusion (integrating LiDAR, IMU, and GPS data), and novel feature extraction methods (e.g., event cameras or coded optics). Advances in VO are crucial for applications such as autonomous robot navigation, augmented reality, and 3D mapping, particularly where GPS is unavailable or unreliable.
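The geometric core of classical feature-based VO, recovering the relative camera pose from point matches between two frames, can be illustrated with a minimal, self-contained sketch. This is a simplified illustration under stated assumptions, not the method of any paper listed below: it applies the normalized eight-point algorithm to synthetic, noise-free correspondences in calibrated coordinates, whereas real pipelines add feature detection and matching, RANSAC for outlier rejection, and scale handling. All function names here are hypothetical.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Eight-point algorithm on calibrated (normalized) image coordinates.

    x1, x2: (N, 2) matched points in two views, N >= 8, already multiplied
    by the inverse camera intrinsics.
    """
    ones = np.ones(len(x1))
    # Each row encodes one epipolar constraint x2^T E x1 = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], ones,
    ])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def triangulate(R, t, a, b):
    # Linear triangulation with camera 1 = [I|0] and camera 2 = [R|t].
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([a[0] * P1[2] - P1[0], a[1] * P1[2] - P1[1],
                   b[0] * P2[2] - P2[0], b[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def recover_pose(E, x1, x2):
    # Decompose E into the four (R, t) candidates and keep the one that
    # places triangulated points in front of both cameras (cheirality test).
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    best = (-1, None, None)
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            n_front = 0
            for a, b in zip(x1, x2):
                X = triangulate(R, t, a, b)
                if X[2] > 0 and (R @ X + t)[2] > 0:
                    n_front += 1
            if n_front > best[0]:
                best = (n_front, R, t)
    return best[1], best[2]

# Synthetic check: known motion, noise-free projections.
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # scene points ahead of camera 1
th = 0.1                                                  # 0.1 rad yaw between frames
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([0.5, 0.0, 0.1])
x1 = pts[:, :2] / pts[:, 2:]
pts2 = pts @ R_true.T + t_true
x2 = pts2[:, :2] / pts2[:, 2:]

E = essential_from_correspondences(x1, x2)
R, t = recover_pose(E, x1, x2)
```

Note that two-view monocular geometry recovers translation only up to an unknown scale (the returned t is a unit vector), which is one reason practical VO systems fuse IMU or stereo data, as several of the papers below do.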
Papers
A Small Form Factor Aerial Research Vehicle for Pick-and-Place Tasks with Onboard Real-Time Object Detection and Visual Odometry
Cora A. Dimmig, Anna Goodridge, Gabriel Baraban, Pupei Zhu, Joyraj Bhowmick, Marin Kobilarov
Stereo Visual Odometry with Deep Learning-Based Point and Line Feature Matching using an Attention Graph Neural Network
Shenbagaraj Kannapiran, Nalin Bendapudi, Ming-Yuan Yu, Devarth Parikh, Spring Berman, Ankit Vora, Gaurav Pandey