Mobile Robot
Mobile robots are autonomous systems designed to navigate and interact with their environment, with research focusing on improving their perception, navigation, and manipulation capabilities. Current efforts concentrate on enhancing robustness through sensor fusion (e.g., combining radar and vision data), efficient motion planning guided by natural language instructions or reinforcement learning, and reliable localization using techniques like visual odometry and polygon-based mapping. These advancements are crucial for expanding the applications of mobile robots in diverse fields, including manufacturing, logistics, healthcare, and exploration, by enabling safer, more efficient, and adaptable operation in complex and dynamic settings.
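Of the techniques named above, sensor fusion is the most self-contained to illustrate. A common building block is inverse-variance weighting, where each sensor's estimate contributes in proportion to its confidence. The sketch below is a minimal, generic example assuming simple scalar range readings with known noise variances; the sensor labels and numbers are illustrative and not drawn from any of the papers listed.

```python
# Minimal sketch of inverse-variance sensor fusion, one common way to
# combine e.g. a radar range estimate with a vision-based one.
# The sensor values and variances here are illustrative assumptions.

def fuse(estimates):
    """Fuse a list of (mean, variance) pairs into one (mean, variance).

    Each reading is weighted by the inverse of its variance, so the
    less noisy sensor dominates the fused estimate.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Hypothetical readings: radar says 10.2 m (noisy), vision says 10.0 m (precise).
fused_mean, fused_var = fuse([(10.2, 0.4), (10.0, 0.1)])
```

The fused mean lands closer to the vision reading because its variance is lower, and the fused variance is smaller than either input's, which is the basic payoff of fusing independent sensors.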
Papers
Model-Free Source Seeking by a Novel Single-Integrator with Attenuating Oscillations and Better Convergence Rate: Robotic Experiments
Shivam Bajpai, Ahmed A. Elgohary, Sameh A. Eisa
Interactive Semantic Map Representation for Skill-based Visual Object Navigation
Tatiana Zemskova, Aleksei Staroverov, Kirill Muravyev, Dmitry Yudin, Aleksandr Panov
Reinforcement learning based local path planning for mobile robot
Mehmet Gok, Mehmet Tekerek, Hamza Aydemir
A Resilient Framework for 5G-Edge-Connected UAVs based on Switching Edge-MPC and Onboard-PID Control
Gerasimos Damigos, Achilleas Santi Seisa, Sumeet Gajanan Satpute, Tore Lindgren, George Nikolakopoulos
RadaRays: Real-time Simulation of Rotating FMCW Radar for Mobile Robotics via Hardware-accelerated Ray Tracing
Alexander Mock, Martin Magnusson, Joachim Hertzberg
Roadmaps with Gaps over Controllers: Achieving Efficiency in Planning under Dynamics
Aravind Sivaramakrishnan, Sumanth Tangirala, Edgar Granados, Noah R. Carver, Kostas E. Bekris