Visual-Inertial
Visual-inertial odometry (VIO) combines camera images with inertial measurement unit (IMU) data to estimate the position and orientation of a moving platform, with the goal of robust and accurate pose estimation across diverse environments. Current research emphasizes improving VIO's efficiency and robustness through techniques such as model compression for lightweight deployment, transformer architectures for improved data fusion and pose estimation, and continuous-time estimation methods for enhanced accuracy. These advancements are crucial for applications ranging from autonomous robots and drones to augmented reality and medical imaging, enabling more reliable and precise navigation and motion tracking in challenging conditions.
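As a rough illustration of the core idea, the sketch below shows a toy, loosely-coupled fusion loop: high-rate IMU samples propagate position, velocity, and orientation, while lower-rate camera-derived positions correct the accumulated drift with a simple complementary blend. The function names (propagate_imu, fuse_camera_position), noise levels, and update rates are hypothetical and chosen purely for illustration; this does not reproduce the method of any paper listed below.

```python
# Toy, loosely-coupled visual-inertial fusion sketch (illustrative only).
# IMU integration runs at high rate; camera position fixes correct drift.
import numpy as np


def propagate_imu(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """Integrate one IMU sample (body-frame specific force and angular rate)."""
    # Rotate the body-frame acceleration into the world frame and remove gravity.
    a_world = R @ acc + g
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    # First-order (small-angle) update of the rotation matrix from the gyro rate.
    wx, wy, wz = gyro * dt
    skew = np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])
    R_new = R @ (np.eye(3) + skew)  # approximation; a real system would re-orthogonalize
    return p_new, v_new, R_new


def fuse_camera_position(p_imu, p_cam, alpha=0.1):
    """Blend the IMU-propagated position with a camera-derived position estimate."""
    return (1.0 - alpha) * p_imu + alpha * p_cam


# Usage: a stationary platform with a noisy 200 Hz IMU and 5 Hz camera fixes.
p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
dt = 0.005
for k in range(1000):
    acc = np.array([0.0, 0.0, 9.81]) + 0.02 * np.random.randn(3)  # specific force + noise
    gyro = 0.001 * np.random.randn(3)
    p, v, R = propagate_imu(p, v, R, acc, gyro, dt)
    if k % 40 == 0:  # camera update every 40 IMU samples
        p = fuse_camera_position(p, p_cam=np.zeros(3))
print("final position estimate:", p)
```

In practice, tightly-coupled VIO systems jointly estimate biases, scale, and feature positions in a filter or factor-graph optimizer rather than blending poses this way; the sketch only conveys the high-rate-prediction / low-rate-correction structure.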
Papers
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
Felix Ott, Nisha Lakshmana Raichur, David Rügamer, Tobias Feigl, Heiko Neumann, Bernd Bischl, Christopher Mutschler
Visual-Inertial SLAM with Tightly-Coupled Dropout-Tolerant GPS Fusion
Simon Boche, Xingxing Zuo, Simon Schaefer, Stefan Leutenegger