Autonomous Navigation
Autonomous navigation research aims to enable robots and vehicles to traverse complex environments without human intervention, with a focus on safe and efficient path planning and execution. Current efforts concentrate on improving perception through sensor fusion (e.g., LiDAR, cameras, sonar) and on machine learning techniques, particularly deep reinforcement learning and neural networks, for decision-making and control, often incorporating prior maps or learned models of environment dynamics. The field underpins advances in robotics, autonomous driving, and space exploration, with applications ranging from warehouse logistics and agricultural automation to underwater exploration and planetary landing.
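As a minimal illustration of the reinforcement-learning approach to navigation mentioned above, the sketch below trains a tabular Q-learning agent to reach a goal cell in a small grid world with obstacles. It is not taken from any of the listed papers; the grid size, obstacle layout, rewards, and hyperparameters are all assumptions chosen for clarity.

```python
import random

# Minimal tabular Q-learning sketch for grid-world navigation.
# Illustrative only: grid layout, rewards, and hyperparameters are assumed.
GRID_W, GRID_H = 5, 5
START, GOAL = (0, 0), (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}            # assumed obstacle cells
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # right, left, up, down
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1          # assumed hyperparameters

Q = {}  # (state, action_index) -> estimated return


def step(state, move):
    """Apply a move; bumping a wall or obstacle leaves the agent in place."""
    nxt = (state[0] + move[0], state[1] + move[1])
    if not (0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H) or nxt in OBSTACLES:
        nxt = state
    done = nxt == GOAL
    reward = 10.0 if done else -1.0             # step cost favors short paths
    return nxt, reward, done


def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = [Q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    return values.index(max(values))


for _ in range(3000):                           # training episodes
    state, done = START, False
    while not done:
        a = choose_action(state)
        nxt, reward, done = step(state, ACTIONS[a])
        best_next = 0.0 if done else max(
            Q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
        old = Q.get((state, a), 0.0)
        # Q-learning update: move the estimate toward the bootstrapped target.
        Q[(state, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = nxt

# Greedy rollout of the learned policy from the start cell.
state, path = START, [START]
for _ in range(GRID_W * GRID_H):
    if state == GOAL:
        break
    a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
    state, _, _ = step(state, ACTIONS[a])
    path.append(state)
print("Learned path:", path)
```

The papers below replace this toy setup with richer state representations (semantic segmentation, memory modules, model predictive control) and real sensor inputs, but the underlying idea of learning or optimizing a navigation policy is the same.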
Papers
Non-linear Model Predictive Control for Multi-task GPS-free Autonomous Navigation in Vineyards
Matteo Sperti, Marco Ambrosio, Mauro Martini, Alessandro Navone, Andrea Ostuni, Marcello Chiaberge
GPS-free Autonomous Navigation in Cluttered Tree Rows with Deep Semantic Segmentation
Alessandro Navone, Mauro Martini, Marco Ambrosio, Andrea Ostuni, Simone Angarano, Marcello Chiaberge
MeSA-DRL: Memory-Enhanced Deep Reinforcement Learning for Advanced Socially Aware Robot Navigation in Crowded Environments
Mannan Saeed Muhammad, Estrella Montero
Bridging the Gap: Regularized Reinforcement Learning for Improved Classical Motion Planning with Safety Modules
Elias Goldsztejn, Ronen I. Brafman
From Two-Dimensional to Three-Dimensional Environment with Q-Learning: Modeling Autonomous Navigation with Reinforcement Learning and no Libraries
Ergon Cugler de Moraes Silva