Environment Exploration
Environment exploration in robotics and AI focuses on enabling agents to learn about and navigate unknown environments efficiently, with goals such as accurate map building, reliable sensor fusion, and timely decision-making. Current research emphasizes deep learning models, such as neural networks and transformers, for tasks like map prediction, sensor calibration (e.g., LiDAR-camera), and skill acquisition, often incorporating reinforcement learning and information-gain calculations to guide exploration strategies. These advances improve the robustness and adaptability of AI agents in complex, dynamic settings, with implications for autonomous navigation, game design, and personalized healthcare.
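To make the idea of information-gain-guided exploration concrete, here is a minimal sketch of one common formulation: score candidate viewpoints on a probabilistic occupancy grid by the total Shannon entropy of the cells their sensor would cover, then greedily pick the highest-scoring one. The grid layout, sensor model, and function names here are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of Bernoulli occupancy probabilities.

    Unknown cells (p = 0.5) contribute 1 bit; known cells (p near 0 or 1)
    contribute almost nothing, so high-entropy regions mark frontiers.
    """
    p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(grid, pos, sensor_range):
    """Total entropy of cells within a circular sensor footprint at pos.

    Assumes an idealized sensor that fully observes every cell in range;
    a real system would weight this by a noise/visibility model.
    """
    ys, xs = np.indices(grid.shape)
    mask = (ys - pos[0]) ** 2 + (xs - pos[1]) ** 2 <= sensor_range ** 2
    return cell_entropy(grid[mask]).sum()

def best_viewpoint(grid, candidates, sensor_range):
    """Greedy one-step strategy: pick the candidate with maximal gain."""
    return max(candidates, key=lambda c: information_gain(grid, c, sensor_range))

# Example: rows 0-9 are known free space, rows 10-19 are unexplored.
grid = np.full((20, 20), 0.5)
grid[:10, :] = 0.0
print(best_viewpoint(grid, [(5, 5), (15, 15)], sensor_range=3))  # -> (15, 15)
```

The greedy one-step rule is the simplest instance of this family; methods in the papers above extend it with learned map predictors, multi-step planning, or explicit risk measures.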
Papers
Risk-aware Meta-level Decision Making for Exploration Under Uncertainty
Joshua Ott, Sung-Kyun Kim, Amanda Bouman, Oriana Peltzer, Mamoru Sobue, Harrison Delecki, Mykel J. Kochenderfer, Joel Burdick, Ali-akbar Agha-mohammadi
Exploration and Coverage with Swarms of Settling Agents
Ori Rappel, Joseph Ben-Asher, Alfred Bruckstein
Deterministic Sequencing of Exploration and Exploitation for Reinforcement Learning
Piyush Gupta, Vaibhav Srivastava