Visual Localization
Visual localization estimates a camera's position and orientation (its 6-DoF pose) within a known environment from images alone, without GPS or other external sensors. Current research focuses on improving accuracy and efficiency, for example by using deep learning models such as neural radiance fields (NeRFs) for scene representation and pose estimation, and by combining image retrieval, keypoint matching, and the fusion of global and local descriptors. These methods matter for autonomous navigation (especially in GPS-denied environments), augmented reality, and robotics, where robust, scalable, and precise localization is required.
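To make the retrieval-then-matching pipeline mentioned above concrete, here is a minimal sketch in Python with OpenCV. It is not the method of any paper listed below: the color-histogram global descriptor is a deliberately simple stand-in for a learned model such as NetVLAD, local matching uses ORB features, and it assumes each database image comes with 3D map coordinates aligned to its ORB keypoints (e.g., triangulated from the same detections during mapping).

```python
"""Hedged sketch of hierarchical visual localization:
coarse global retrieval, then local 2D-3D matching and PnP."""
import cv2
import numpy as np


def global_descriptor(image):
    """Coarse whole-image descriptor for retrieval.

    A color histogram is a placeholder for a learned global
    descriptor (e.g., NetVLAD). Assumes a 3-channel BGR image.
    """
    hist = cv2.calcHist([image], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-8)


def retrieve(query_desc, db_descs):
    """Return the index of the most similar database image.

    db_descs: (M, D) array of unit-norm descriptors, so the dot
    product is cosine similarity.
    """
    return int(np.argmax(db_descs @ query_desc))


def localize(query_img, db_img, db_points3d, K):
    """Match local keypoints against one retrieved image, then solve PnP.

    db_points3d: (N, 3) array; assumed to give the 3D map coordinate of
    the i-th ORB keypoint detected on db_img (an assumption of this
    sketch, standing in for an SfM map).
    K: 3x3 intrinsics of the query camera.
    Returns (rvec, tvec) on success, else None.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    kp_d, des_d = orb.detectAndCompute(db_img, None)
    if des_q is None or des_d is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_d)
    if len(matches) < 6:
        return None  # too few 2D-3D correspondences for a stable PnP

    pts_2d = np.float32([kp_q[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([db_points3d[m.trainIdx] for m in matches])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    return (rvec, tvec) if ok else None
```

The hierarchy is what makes this scale: the cheap global descriptor prunes the map down to a handful of candidate images, so expensive keypoint matching and RANSAC-based pose estimation run only against those candidates rather than the whole database.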
Papers
Hierarchical Visual Localization Based on Sparse Feature Pyramid for Adaptive Reduction of Keypoint Map Size
Andrei Potapov, Mikhail Kurenkov, Pavel Karpyshev, Evgeny Yudin, Alena Savinykh, Evgeny Kruzhkov, Dzmitry Tsetserukou
Privacy-Preserving Representations are not Enough -- Recovering Scene Content from Camera Poses
Kunal Chelani, Torsten Sattler, Fredrik Kahl, Zuzana Kukelova