Embodied AI
Embodied AI focuses on creating artificial agents that can perceive, act in, and reason about the physical world, mirroring human capabilities. Current research emphasizes agents that perform complex tasks involving navigation, manipulation, and interaction with dynamic environments, often integrating large language models (LLMs) with reinforcement learning (RL) and transformer-based architectures to improve planning, memory, and adaptability. The field matters both for progress toward artificial general intelligence and for practical applications in robotics, autonomous systems, and human-computer interaction, particularly assistive technologies and healthcare.
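The perceive-plan-act cycle described above can be sketched with a toy example. This is a hypothetical minimal illustration, not code from any of the papers below: `GridWorld`, `greedy_policy`, and `run_episode` are invented names, and the "planning" step is a trivial greedy heuristic standing in for the learned policies real embodied agents use.

```python
from dataclasses import dataclass

@dataclass
class GridWorld:
    # A tiny stand-in for a physical environment: an agent on a grid
    # must reach a goal cell. Names and structure are illustrative only.
    size: int = 5
    agent: tuple = (0, 0)
    goal: tuple = (4, 4)

    def observe(self):
        # "Perception": the agent reads its own position and the goal.
        return self.agent, self.goal

    def step(self, action):
        # "Action": move one cell, clipped to the grid bounds.
        dx, dy = {"up": (0, 1), "down": (0, -1),
                  "left": (-1, 0), "right": (1, 0)}[action]
        x, y = self.agent
        self.agent = (min(max(x + dx, 0), self.size - 1),
                      min(max(y + dy, 0), self.size - 1))
        return self.agent == self.goal  # done flag

def greedy_policy(obs):
    # "Planning": pick the action that closes the gap to the goal.
    # A learned policy (RL- or LLM-driven) would replace this heuristic.
    (x, y), (gx, gy) = obs
    if x != gx:
        return "right" if gx > x else "left"
    return "up" if gy > y else "down"

def run_episode(env, max_steps=50):
    # The closed perceive-plan-act loop: observe, choose, act, repeat.
    for t in range(1, max_steps + 1):
        done = env.step(greedy_policy(env.observe()))
        if done:
            return t  # steps taken to reach the goal
    return None  # goal not reached within the budget
```

Running `run_episode(GridWorld())` drives the agent from (0, 0) to (4, 4); the point is only the loop structure, which research systems scale up with rich perception, long-horizon planners, and real or simulated robot actuation.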
Papers
IGOR: Image-GOal Representations are the Atomic Control Units for Foundation Models in Embodied AI
Xiaoyu Chen, Junliang Guo, Tianyu He, Chuheng Zhang, Pushi Zhang, Derek Cathera Yang, Li Zhao, Jiang Bian
BestMan: A Modular Mobile Manipulator Platform for Embodied AI with Unified Simulation-Hardware APIs
Kui Yang, Nieqing Cao, Yan Ding, Chao Chen