Environment Exploration
Environment exploration in robotics and AI focuses on enabling agents to learn about and navigate unknown environments efficiently, addressing problems such as map building, sensor fusion, and decision-making under uncertainty. Current research emphasizes deep learning models, including neural networks and transformers, for tasks like map prediction, sensor calibration (e.g., LiDAR-camera), and skill acquisition, often guiding exploration with reinforcement learning and information-gain calculations. These advances improve the robustness and adaptability of AI agents in complex, dynamic settings, with implications for autonomous navigation, game design, and personalized healthcare.
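The information-gain-guided exploration mentioned above can be illustrated with a minimal sketch: an agent on an occupancy grid scores candidate frontier cells by the Shannon entropy of the cells a sensor there would observe, and moves toward the highest-scoring one. All names and the square sensor footprint here are illustrative assumptions, not taken from any of the listed papers.

```python
import math

def cell_entropy(p):
    # Shannon entropy (bits) of an occupancy probability:
    # 0 for fully known cells (p near 0 or 1), maximal (1 bit) at p = 0.5.
    p = min(max(p, 1e-9), 1 - 1e-9)  # clamp to avoid log(0)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(grid, cell, radius=1):
    # Total entropy of the cells a sensor at `cell` would observe,
    # assuming (for simplicity) a square observation footprint.
    r, c = cell
    gain = 0.0
    for i in range(max(r - radius, 0), min(r + radius + 1, len(grid))):
        for j in range(max(c - radius, 0), min(c + radius + 1, len(grid[0]))):
            gain += cell_entropy(grid[i][j])
    return gain

def best_frontier(grid, frontiers):
    # Greedy strategy: pick the frontier cell whose observation
    # yields the largest expected information gain.
    return max(frontiers, key=lambda f: information_gain(grid, f))

# Toy map: 0.5 = unknown, 0.0 = known free space.
grid = [
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0],
]
frontiers = [(0, 1), (2, 2)]
print(best_frontier(grid, frontiers))  # → (2, 2): its footprint covers more unknown cells
```

Real systems replace the square footprint with a ray-cast sensor model and trade the expected gain off against travel cost, but the core idea, quantifying uncertainty with entropy and acting to reduce it, is the same.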
Papers
WiSER-X: Wireless Signals-based Efficient Decentralized Multi-Robot Exploration without Explicit Information Exchange
Ninad Jadhav, Meghna Behari, Robert J. Wood, Stephanie Gil
Attribution for Enhanced Explanation with Transferable Adversarial eXploration
Zhiyu Zhu, Jiayu Zhang, Zhibo Jin, Huaming Chen, Jianlong Zhou, Fang Chen
Apollo: An Exploration of Video Understanding in Large Multimodal Models
Orr Zohar, Xiaohan Wang, Yann Dubois, Nikhil Mehta, Tong Xiao, Philippe Hansen-Estruch, Licheng Yu, Xiaofang Wang, Felix Juefei-Xu, Ning Zhang, Serena Yeung-Levy, Xide Xia
Virtualization & Microservice Architecture for Software-Defined Vehicles: An Evaluation and Exploration
Long Wen, Markus Rickert, Fengjunjie Pan, Jianjie Lin, Yu Zhang, Tobias Betz, Alois Knoll