Visual Localization
Visual localization aims to determine a camera's position and orientation within a known environment from visual information alone, without relying on GPS or other external sensors. Current research focuses on improving accuracy and efficiency through approaches such as neural radiance fields (NeRFs) for scene representation and pose estimation, image retrieval, keypoint matching, and the fusion of global and local descriptors. These advances are crucial for applications such as autonomous navigation (especially in GPS-denied environments), augmented reality, and robotics, where robust, scalable, and precise localization is required across diverse settings.
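To make the retrieval-then-matching pipeline mentioned above concrete, here is a minimal sketch in Python using OpenCV and NumPy. It is not any specific paper's method: `retrieve_candidates`, `localize`, and the `db_pts3d_lookup` callback are hypothetical names, and the sketch assumes each database image comes with known 3D scene points for its keypoints (e.g., from a prior structure-from-motion reconstruction).

import numpy as np
import cv2

def retrieve_candidates(query_desc, db_descs, k=3):
    """Global stage: rank database images by cosine similarity of
    global descriptors (e.g., from a place-recognition network)."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:k]  # indices of the top-k candidate images

def localize(query_img, db_img, db_pts3d_lookup, K):
    """Local stage: match keypoints against one retrieved image,
    then estimate the 6-DoF camera pose with PnP + RANSAC.

    db_pts3d_lookup is a hypothetical callback mapping a database
    keypoint location to its 3D scene point."""
    orb = cv2.ORB_create()
    kq, dq = orb.detectAndCompute(query_img, None)
    kd, dd = orb.detectAndCompute(db_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dq, dd)
    # Pair 2D query keypoints with the 3D points known for the db image.
    pts2d = np.float32([kq[m.queryIdx].pt for m in matches])
    pts3d = np.float32([db_pts3d_lookup(kd[m.trainIdx].pt) for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return rvec, tvec  # camera rotation (axis-angle) and translation

A production system would verify inlier counts, try several retrieved candidates, and typically replace ORB with learned local features; the structure of the pipeline, however, stays the same.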
Papers
Around the World in 80 Timesteps: A Generative Approach to Global Visual Geolocation
Nicolas Dufour, David Picard, Vicky Kalogeiton, Loic Landrieu
Enhancing Scene Coordinate Regression with Efficient Keypoint Detection and Sequential Information
Kuan Xu, Zeyu Jiang, Haozhi Cao, Shenghai Yuan, Chen Wang, Lihua Xie