Neural SLAM
Neural SLAM (Simultaneous Localization and Mapping) uses deep learning to build a 3D model of an environment while simultaneously tracking the camera's position within it. Current research focuses on improving the accuracy and scalability of these models, often with hybrid representations that combine neural implicit functions (such as NeRFs) with efficient data structures like hash grids or tri-planes, so the map can capture high-frequency detail while still scaling to large scenes. These advances address limitations of traditional SLAM, particularly in dynamic environments and large-scale mapping, yielding more robust and accurate 3D reconstruction for applications such as robotics, augmented reality, and autonomous navigation.
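To make the hybrid-representation idea concrete, the sketch below shows a minimal multi-resolution hash-grid feature encoder of the kind such systems pair with a small MLP decoder. This is an illustrative NumPy toy, not any particular paper's implementation; the class name, level counts, table size, and hash primes are all assumptions chosen for clarity.

```python
import numpy as np

# Illustrative sketch of a multi-resolution hash-grid encoder (assumed sizes).
# Large primes are a common choice for spatial hashing of voxel corners.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(ijk, table_size):
    """Hash integer voxel-corner coordinates into a fixed-size feature table."""
    h = np.zeros(ijk.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= ijk[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

class HashGridEncoder:
    """Toy encoder: several resolution levels, each backed by a hash table."""
    def __init__(self, n_levels=4, table_size=2**14, feat_dim=2,
                 base_res=16, growth=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.tables = [rng.normal(0.0, 1e-2, (table_size, feat_dim))
                       for _ in range(n_levels)]
        self.res = [int(base_res * growth**level) for level in range(n_levels)]
        self.table_size = table_size

    def __call__(self, xyz):
        """Encode points in [0,1]^3; returns concatenated per-level features."""
        feats = []
        for table, res in zip(self.tables, self.res):
            x = xyz * res
            i0 = np.floor(x).astype(np.int64)   # lower voxel corner
            w = x - i0                          # trilinear weights
            f = np.zeros((xyz.shape[0], table.shape[1]))
            for corner in range(8):             # blend the 8 cube corners
                offset = np.array([(corner >> d) & 1 for d in range(3)])
                idx = hash_coords(i0 + offset, self.table_size)
                cw = np.prod(np.where(offset, w, 1 - w), axis=1, keepdims=True)
                f += cw * table[idx]
            feats.append(f)
        return np.concatenate(feats, axis=1)

enc = HashGridEncoder()
pts = np.random.default_rng(1).random((5, 3))  # 5 query points in the unit cube
features = enc(pts)
print(features.shape)  # (5, 8): 4 levels x 2 features per level
```

In a full system, these features would feed an MLP that predicts occupancy or signed distance plus color, and the hash tables would be optimized jointly with the camera poses; the hash lookup keeps memory constant per level regardless of scene extent, which is what makes large-scale mapping tractable.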