Real World Environment
Real-world environment research focuses on enabling robots and AI agents to perceive, navigate, and interact effectively within complex, dynamic, and unpredictable settings. Current work emphasizes robust perception through multimodal sensor fusion (e.g., LiDAR, cameras, and tactile sensors) and advanced model architectures such as transformers and neural radiance fields, which build accurate 3D scene representations and predict traversability. These advances are crucial for robotics, autonomous driving, and human-computer interaction, helping to bridge the significant "sim-to-real" gap and yielding more reliable, adaptable AI systems across diverse real-world applications.
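To make the multimodal-fusion idea concrete, here is a minimal, hypothetical late-fusion sketch: features from each sensor are embedded separately, concatenated, and passed through a linear head that scores traversability. All names, dimensions, and weights below are illustrative placeholders (random weights stand in for trained encoders such as the graph-based fusion networks used in papers like GrASPE); this is a sketch of the general pattern, not any paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Project raw per-sensor features into a shared embedding space."""
    return np.tanh(features @ weight)

# Toy inputs: a 16-dim LiDAR descriptor and a 32-dim camera descriptor
# (hypothetical sizes; real descriptors come from learned encoders).
lidar_feat = rng.standard_normal(16)
camera_feat = rng.standard_normal(32)

# Per-modality projection weights -- placeholders for trained parameters.
w_lidar = rng.standard_normal((16, 8))
w_camera = rng.standard_normal((32, 8))

# Late fusion: concatenate the two 8-dim modality embeddings.
fused = np.concatenate([embed(lidar_feat, w_lidar),
                        embed(camera_feat, w_camera)])

# Linear head mapping the 16-dim fused vector to a traversability
# score in (0, 1) via a sigmoid.
w_head = rng.standard_normal(16)
score = 1.0 / (1.0 + np.exp(-(fused @ w_head)))
print(f"traversability score: {score:.3f}")
```

The design choice illustrated is late fusion: each modality keeps its own encoder, so a degraded sensor (e.g., a camera in low light) corrupts only its own embedding rather than the shared representation, which is one reason fusion approaches are favored in unstructured outdoor environments.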
Papers
A Dual-Arm Collaborative Framework for Dexterous Manipulation in Unstructured Environments with Contrastive Planning
Shengzeng Huo, Fangyuan Wang, Luyin Hu, Peng Zhou, Jihong Zhu, Hesheng Wang, David Navarro-Alarcon
GrASPE: Graph based Multimodal Fusion for Robot Navigation in Unstructured Outdoor Environments
Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Jing Liang, Tianrui Guan, Utsav Patel, Dinesh Manocha