Environment Exploration
Environment exploration in robotics and AI aims to enable agents to learn about and navigate unknown environments efficiently, with objectives such as accurate map building, sensor fusion, and sample-efficient decision-making. Current research leverages deep learning models, including transformers, for tasks such as map prediction, sensor calibration (e.g., LiDAR-camera), and skill acquisition, and often uses reinforcement learning and information-gain measures to guide exploration strategies. These advances improve the robustness and adaptability of AI agents in complex, dynamic settings, with applications in autonomous navigation, game design, and personalized healthcare.
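To make the information-gain idea concrete, the sketch below scores candidate viewpoints on a 2D occupancy grid by the Shannon entropy of the cells a sensor would observe, then greedily picks the most uncertain region. This is a generic illustration, not the method of any paper listed below; the grid values, the `sensor_radius`, the candidate list, and the assumption that an observation fully resolves a cell are all simplifications for the example.

```python
# Minimal sketch of entropy-based information gain for exploration on a
# 2D occupancy grid. Cell values are occupancy probabilities in [0, 1];
# 0.5 means "unknown". Names and parameters here are illustrative.
import numpy as np

def cell_entropy(p: np.ndarray) -> np.ndarray:
    """Shannon entropy (bits) of each Bernoulli occupancy cell."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)  # avoid log(0)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def expected_information_gain(grid: np.ndarray, pos: tuple, sensor_radius: int) -> float:
    """Sum of entropy inside the square sensor footprint centered at `pos`.

    Assumes (optimistically) that observing a cell resolves it completely,
    so the expected gain equals the current entropy of the visible cells.
    """
    r, c = pos
    r0, r1 = max(0, r - sensor_radius), min(grid.shape[0], r + sensor_radius + 1)
    c0, c1 = max(0, c - sensor_radius), min(grid.shape[1], c + sensor_radius + 1)
    return float(cell_entropy(grid[r0:r1, c0:c1]).sum())

def pick_next_viewpoint(grid, candidates, sensor_radius=5):
    """Greedy strategy: move the sensor where the map is most uncertain."""
    return max(candidates,
               key=lambda p: expected_information_gain(grid, p, sensor_radius))

# Usage: a mostly-unknown map with one already-explored corner.
grid = np.full((50, 50), 0.5)   # 0.5 = maximum uncertainty everywhere
grid[:10, :10] = 0.05           # already-mapped free space
best = pick_next_viewpoint(grid, candidates=[(5, 5), (25, 25), (45, 45)])
print("next viewpoint:", best)  # picks a cell in unexplored territory
```

In practice this greedy score is usually combined with travel cost (e.g., gain per unit path length) and a probabilistic sensor model rather than the fully-resolving observation assumed here.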
Papers
ELDEN: Exploration via Local Dependencies
Jiaheng Hu, Zizhao Wang, Peter Stone, Roberto Martin-Martin
Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach
Heasung Kim, Sravan Kumar Ankireddy
Dealing with uncertainty: balancing exploration and exploitation in deep recurrent reinforcement learning
Valentina Zangirolami, Matteo Borrotti
NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration
Ajay Sridhar, Dhruv Shah, Catherine Glossop, Sergey Levine
Democratizing LLMs: An Exploration of Cost-Performance Trade-offs in Self-Refined Open-Source Models
Sumuk Shashidhar, Abhinav Chinta, Vaibhav Sahai, Zhenhailong Wang, Heng Ji
Spherical Rolling Robots Design, Modeling, and Control: A Systematic Literature Review
Aminata Diouf, Bruno Belzile, Maarouf Saad, David St-Onge
Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond
Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Tianyu Liu, Baobao Chang
A Distributed Multi-Robot Framework for Exploration, Information Acquisition and Consensus
Aalok Patwardhan, Andrew J. Davison