Indoor Navigation
Indoor navigation research aims to enable autonomous agents, such as robots and drones, as well as assistive technologies, to navigate reliably through indoor environments, where GPS signals are typically unavailable. Current efforts focus on developing robust algorithms, including deep reinforcement learning, vision-language models, and sensor fusion techniques (e.g., combining LiDAR, cameras, and inertial measurement units), to overcome challenges such as dynamic obstacles and imprecise localization. These advances hold significant promise for improving accessibility for visually impaired individuals, enhancing robotic autonomy in applications such as delivery and search and rescue, and optimizing human movement in crowded indoor spaces.
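As a rough illustration of the sensor-fusion idea mentioned above, the sketch below blends IMU dead reckoning with occasional absolute position fixes (for example, from a detected fiducial marker, as in the papers listed below) using a simple complementary-style filter. This is a minimal sketch, not taken from any of the listed papers: the blending gain ALPHA, the update rates, and the simulated measurement stream are illustrative assumptions, and real systems typically rely on Kalman-filter or factor-graph formulations instead.

```python
import numpy as np

# Minimal 2D complementary-filter sketch: dead-reckon position from IMU
# acceleration, then blend in an occasional absolute fix (e.g., from a
# detected fiducial marker). All numeric values are illustrative.

ALPHA = 0.2  # weight given to an absolute fix when one is available


def predict(pos, vel, accel, dt):
    """Propagate position and velocity by integrating IMU acceleration."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel


def correct(pos, fix):
    """Blend the dead-reckoned position toward an absolute position fix."""
    return (1.0 - ALPHA) * pos + ALPHA * fix


if __name__ == "__main__":
    pos = np.zeros(2)   # estimated position (m)
    vel = np.zeros(2)   # estimated velocity (m/s)
    dt = 0.02           # 50 Hz IMU rate (assumed)

    # Simulated stream: constant acceleration, with a marker-based fix
    # arriving every 50 IMU steps (i.e., once per second).
    for step in range(200):
        accel = np.array([0.1, 0.0])          # simulated IMU reading (m/s^2)
        pos, vel = predict(pos, vel, accel, dt)
        if step % 50 == 49:
            t = (step + 1) * dt
            marker_fix = np.array([0.5 * 0.1 * t ** 2, 0.0])  # ground-truth position
            pos = correct(pos, marker_fix)

    print("final position estimate:", pos)
```

The prediction step keeps the estimate smooth between fixes, while the correction step bounds the drift that pure IMU integration would otherwise accumulate; the same structure underlies more sophisticated visual-inertial pipelines.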
Papers
The Invisible Map: Visual-Inertial SLAM with Fiducial Markers for Smartphone-based Indoor Navigation
Paul Ruvolo, Ayush Chakraborty, Rucha Dave, Richard Li, Duncan Mazza, Xierui Shen, Raiyan Siddique, Krishna Suresh
Autonomous Mapping and Navigation using Fiducial Markers and Pan-Tilt Camera for Assisting Indoor Mobility of Blind and Visually Impaired People
Dharmateja Adapa, Virendra Singh Shekhawat, Avinash Gautam, Sudeept Mohan