Visual-Inertial
Visual-inertial odometry (VIO) fuses camera images with inertial measurement unit (IMU) data to estimate the position and orientation (pose) of a moving platform. Current research focuses on improving VIO's efficiency and robustness through model compression for lightweight deployment, transformer architectures for improved sensor fusion and pose estimation, and continuous-time estimation methods for higher accuracy. These advances are crucial for applications ranging from autonomous robots and drones to augmented reality and medical imaging, enabling more reliable and precise navigation and motion tracking in challenging conditions.
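As a concrete, heavily simplified illustration of the camera-IMU fusion described above, the sketch below dead-reckons a pose from IMU samples and corrects the position with camera-derived fixes in a loosely coupled, Kalman-style update. The `SimpleVIO` class, sensor values, rates, and noise parameters are all illustrative assumptions and are not taken from any of the listed papers.

```python
# Minimal sketch of loosely coupled visual-inertial fusion (illustrative only).
# All sensor values and noise parameters below are made up for the example.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: rotation matrix from a rotation vector."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3) + skew(phi)
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

class SimpleVIO:
    """Toy loosely coupled fusion: IMU propagation + camera position correction."""
    def __init__(self):
        self.R = np.eye(3)              # body-to-world rotation
        self.v = np.zeros(3)            # velocity in the world frame
        self.p = np.zeros(3)            # position in the world frame
        self.P_pos = np.eye(3) * 0.01   # position covariance

    def propagate_imu(self, accel_body, gyro_body, dt, accel_noise=0.05):
        """Dead-reckon the state forward with one IMU sample."""
        self.R = self.R @ exp_so3(gyro_body * dt)
        accel_world = self.R @ accel_body + GRAVITY
        self.p = self.p + self.v * dt + 0.5 * accel_world * dt**2
        self.v = self.v + accel_world * dt
        # Inflate position uncertainty while dead-reckoning on IMU alone.
        self.P_pos += np.eye(3) * (accel_noise * dt**2)**2

    def update_camera(self, p_cam, cam_noise=0.02):
        """Fuse a camera-derived position fix with a Kalman-style gain."""
        R_meas = np.eye(3) * cam_noise**2
        K = self.P_pos @ np.linalg.inv(self.P_pos + R_meas)   # Kalman gain
        self.p = self.p + K @ (p_cam - self.p)
        self.P_pos = (np.eye(3) - K) @ self.P_pos

# Example: 100 Hz IMU stream with a 10 Hz visual position fix (synthetic values).
vio = SimpleVIO()
dt = 0.01
for step in range(100):
    accel = np.array([0.1, 0.0, 9.81])   # body-frame specific force (m/s^2)
    gyro = np.array([0.0, 0.0, 0.02])    # body-frame angular rate (rad/s)
    vio.propagate_imu(accel, gyro, dt)
    if step % 10 == 9:                   # camera update every 10 IMU samples
        vio.update_camera(np.array([0.05 * (step + 1) * dt, 0.0, 0.0]))
print("estimated position:", vio.p)
```

In practice, systems like those surveyed above use tightly coupled formulations (jointly optimizing visual features and IMU preintegration factors), but the loosely coupled propagate-then-correct structure shown here captures the basic role each sensor plays.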
Papers
InCrowd-VI: A Realistic Visual-Inertial Dataset for Evaluating SLAM in Indoor Pedestrian-Rich Spaces for Human Navigation
Marziyeh Bamdad, Hans-Peter Hutter, Alireza Darvishy
Dehazing-aided Multi-Rate Multi-Modal Pose Estimation Framework for Mitigating Visual Disturbances in Extreme Underwater Domain
Vidya Sudevan, Fakhreddine Zayer, Taimur Hassan, Sajid Javed, Hamad Karki, Giulia De Masi, Jorge Dias