Embodied AI
Embodied AI focuses on creating artificial agents that can perceive, interact with, and reason about the physical world, mirroring human capabilities. Current research emphasizes agents that perform complex tasks involving navigation, manipulation, and interaction with dynamic environments, often integrating large language models (LLMs) with reinforcement learning (RL) and transformer-based architectures to improve planning, memory, and adaptability. The field is significant for advancing artificial general intelligence and has practical implications for robotics, autonomous systems, and human-computer interaction, particularly in assistive technologies and healthcare.
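The perceive-plan-act loop underlying such agents can be sketched minimally as follows. Everything here (the GridWorld environment, the plan function, the run_episode driver) is a hypothetical illustration, not the method of any paper listed on this page; a greedy rule-based planner stands in for an LLM or RL policy.

```python
from dataclasses import dataclass

@dataclass
class GridWorld:
    """Toy environment: an agent on an integer grid seeking a goal cell.
    (Hypothetical stand-in for a physical or simulated robot workspace.)"""
    goal: tuple
    agent_pos: tuple = (0, 0)

    def observe(self):
        # Perception: report the agent's position and the goal location.
        return {"pos": self.agent_pos, "goal": self.goal}

    def step(self, action):
        # Actuation: move one cell in the chosen direction.
        dx, dy = {"east": (1, 0), "west": (-1, 0),
                  "north": (0, 1), "south": (0, -1)}[action]
        x, y = self.agent_pos
        self.agent_pos = (x + dx, y + dy)

def plan(obs):
    # Planning: a rule-based stand-in for an LLM or RL planner that maps
    # an observation to the next action (greedy move toward the goal).
    (x, y), (gx, gy) = obs["pos"], obs["goal"]
    if x < gx: return "east"
    if x > gx: return "west"
    if y < gy: return "north"
    if y > gy: return "south"
    return "done"

def run_episode(env, max_steps=20):
    # The perceive-plan-act loop: observe, choose an action, execute it.
    trajectory = []
    for _ in range(max_steps):
        action = plan(env.observe())
        if action == "done":
            break
        env.step(action)
        trajectory.append(action)
    return trajectory

env = GridWorld(goal=(2, 1))
print(run_episode(env))  # → ['east', 'east', 'north']
```

In real systems the plan function is where an LLM or learned policy would sit, consuming richer observations (images, maps, language instructions) and producing action sequences; the loop structure itself stays the same.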
Papers
SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems
Wenxiao Zhang, Xiangrui Kong, Thomas Braunl, Jin B. Hong
PR2: A Physics- and Photo-realistic Testbed for Embodied AI and Humanoid Robots
Hangxin Liu, Qi Xie, Zeyu Zhang, Tao Yuan, Xiaokun Leng, Lining Sun, Song-Chun Zhu, Jingwen Zhang, Zhicheng He, Yao Su
FLAME: Learning to Navigate with Multimodal LLM in Urban Environments
Yunzhe Xu, Yiyuan Pan, Zhe Liu, Hesheng Wang
All Robots in One: A New Standard and Unified Dataset for Versatile, General-Purpose Embodied Agents
Zhiqiang Wang, Hao Zheng, Yunshuang Nie, Wenjun Xu, Qingwei Wang, Hua Ye, Zhe Li, Kaidong Zhang, Xuewen Cheng, Wanxi Dong, Chang Cai, Liang Lin, Feng Zheng, Xiaodan Liang