Embodied AI
Embodied AI focuses on creating artificial agents that can perceive, interact with, and reason about the physical world, mirroring human capabilities. Current research emphasizes agents that perform complex tasks involving navigation, manipulation, and interaction with dynamic environments. A common approach integrates large language models (LLMs) with reinforcement learning (RL) and transformer-based architectures to improve planning, memory, and adaptability. The field is significant for advancing artificial general intelligence and has practical implications for robotics, autonomous systems, and human-computer interaction, particularly in assistive technologies and healthcare.
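The perceive-plan-act loop described above can be sketched in a few lines. The snippet below is a minimal toy illustration, not drawn from any of the papers listed here: `CorridorEnv`, `plan`, and `run_episode` are hypothetical names, and the `plan` function is a trivial stand-in for the LLM- or RL-based planners used in practice.

```python
from dataclasses import dataclass

# Hypothetical toy environment: the agent moves along a 1-D corridor
# toward a goal cell. Real embodied-AI environments are far richer.
@dataclass
class CorridorEnv:
    goal: int = 3
    position: int = 0

    def observe(self) -> dict:
        """Perception: return the agent's current observation."""
        return {"position": self.position, "goal": self.goal}

    def step(self, action: str) -> None:
        """Actuation: apply the chosen action to the environment."""
        if action == "forward":
            self.position += 1
        elif action == "back":
            self.position -= 1

def plan(observation: dict) -> str:
    """Trivial stand-in for an LLM/RL planner: observation -> action."""
    if observation["position"] < observation["goal"]:
        return "forward"
    if observation["position"] > observation["goal"]:
        return "back"
    return "stop"

def run_episode(env: CorridorEnv, max_steps: int = 10) -> list:
    """Perceive-plan-act loop with a simple action memory."""
    memory = []
    for _ in range(max_steps):
        action = plan(env.observe())
        if action == "stop":
            break
        env.step(action)
        memory.append(action)
    return memory
```

For example, `run_episode(CorridorEnv(goal=3))` drives the agent to the goal in three steps; the research surveyed here replaces the hand-coded planner with learned policies and the dictionary observation with visual and multimodal input.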
Papers
The Essential Role of Causality in Foundation World Models for Embodied AI
Tarun Gupta, Wenbo Gong, Chao Ma, Nick Pawlowski, Agrin Hilmkil, Meyer Scetbon, Marc Rigter, Ade Famoti, Ashley Juan Llorens, Jianfeng Gao, Stefan Bauer, Danica Kragic, Bernhard Schölkopf, Cheng Zhang
A call for embodied AI
Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl
Selective Visual Representations Improve Convergence and Generalization for Embodied AI
Ainaz Eftekhar, Kuo-Hao Zeng, Jiafei Duan, Ali Farhadi, Ani Kembhavi, Ranjay Krishna
Scene-Driven Multimodal Knowledge Graph Construction for Embodied AI
Yaoxian Song, Penglei Sun, Haoyu Liu, Zhixu Li, Wei Song, Yanghua Xiao, Xiaofang Zhou