Real World Environment
Real-world environment research focuses on enabling robots and AI agents to perceive, navigate, and interact effectively within complex, dynamic, and unpredictable settings. Current work emphasizes robust perception through multimodal sensor fusion (e.g., LiDAR, cameras, tactile sensors) and advanced model architectures such as transformers and neural radiance fields to build accurate 3D representations and predict traversability. These advances are crucial for robotics, autonomous driving, and human-computer interaction, as they help bridge the significant "sim-to-real" gap and yield more reliable, adaptable AI systems across diverse real-world applications.
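As a loose illustration of the multimodal fusion idea mentioned above (not drawn from any of the listed papers), the sketch below shows a hypothetical late-fusion module that projects LiDAR and camera feature embeddings into a shared space, concatenates them, and regresses a traversability score. All module names, dimensions, and the PyTorch framing are assumptions for illustration only.

```python
# Hypothetical late-fusion sketch: combine LiDAR and camera embeddings
# to predict a per-sample traversability score. Names and dimensions are
# illustrative assumptions, not taken from the papers listed below.
import torch
import torch.nn as nn


class LateFusionTraversability(nn.Module):
    def __init__(self, lidar_dim: int = 256, camera_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.lidar_proj = nn.Sequential(nn.Linear(lidar_dim, hidden_dim), nn.ReLU())
        self.camera_proj = nn.Sequential(nn.Linear(camera_dim, hidden_dim), nn.ReLU())
        # Fuse by concatenation, then regress a traversability score in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feat: torch.Tensor, camera_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.lidar_proj(lidar_feat), self.camera_proj(camera_feat)], dim=-1)
        return self.head(fused)


# Usage with random stand-in features for a batch of 4 observations.
model = LateFusionTraversability()
scores = model(torch.randn(4, 256), torch.randn(4, 512))
print(scores.shape)  # torch.Size([4, 1])
```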
Papers
Robot Navigation Using Physically Grounded Vision-Language Models in Outdoor Environments
Mohamed Elnoor, Kasun Weerakoon, Gershom Seneviratne, Ruiqi Xian, Tianrui Guan, Mohamed Khalid M Jaffar, Vignesh Rajagopal, Dinesh Manocha
WildFusion: Multimodal Implicit 3D Reconstructions in the Wild
Yanbaihui Liu, Boyuan Chen