Embodied AI
Embodied AI focuses on creating artificial agents that can perceive, interact with, and reason about the physical world, mirroring human capabilities. Current research emphasizes agents that perform complex tasks involving navigation, manipulation, and interaction with dynamic environments, often combining large language models (LLMs) with reinforcement learning (RL) and transformer-based architectures to improve planning, memory, and adaptability. The field is significant for advancing artificial general intelligence and has practical implications for robotics, autonomous systems, and human-computer interaction, particularly in assistive technologies and healthcare.
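To make the "LLM planner plus low-level RL policy" pattern mentioned above concrete, the sketch below shows a minimal perceive-plan-act loop. It is an illustrative sketch only: every name in it (Observation, LanguagePlanner, LowLevelPolicy, EmbodiedAgent) is a hypothetical stand-in and is not taken from any of the papers listed below.

```python
# Minimal illustrative sketch of a perceive-plan-act loop for an embodied agent.
# All class and method names are hypothetical stand-ins, not APIs from the
# papers listed on this page.
from dataclasses import dataclass, field


@dataclass
class Observation:
    rgb: list    # placeholder for camera pixels
    pose: tuple  # (x, y, heading) of the agent


class LanguagePlanner:
    """Stands in for an LLM that maps an instruction plus memory to a subgoal."""

    def propose_subgoal(self, instruction: str, memory: list) -> str:
        # A real system would query an LLM here; this stub returns a fixed subgoal
        # on the first step and signals completion afterwards.
        return "navigate_to:kitchen" if not memory else "done"


class LowLevelPolicy:
    """Stands in for an RL-trained controller that turns subgoals into actions."""

    def act(self, subgoal: str, obs: Observation) -> str:
        return "stop" if subgoal == "done" else "move_forward"


@dataclass
class EmbodiedAgent:
    planner: LanguagePlanner
    policy: LowLevelPolicy
    memory: list = field(default_factory=list)

    def step(self, instruction: str, obs: Observation) -> str:
        # Plan a subgoal, record it in episodic memory, then act on it.
        subgoal = self.planner.propose_subgoal(instruction, self.memory)
        self.memory.append(subgoal)
        return self.policy.act(subgoal, obs)


if __name__ == "__main__":
    agent = EmbodiedAgent(LanguagePlanner(), LowLevelPolicy())
    obs = Observation(rgb=[], pose=(0.0, 0.0, 0.0))
    print(agent.step("go to the kitchen", obs))  # -> "move_forward"
    print(agent.step("go to the kitchen", obs))  # -> "stop"
```

The split between a high-level planner and a low-level controller is only one common way to organize such agents; other approaches train a single end-to-end policy directly from observations.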
Papers
A Contextual Bandit Approach for Learning to Plan in Environments with Probabilistic Goal Configurations
Sohan Rudra, Saksham Goel, Anirban Santara, Claudio Gentile, Laurent Perron, Fei Xia, Vikas Sindhwani, Carolina Parada, Gaurav Aggarwal
Instance-Specific Image Goal Navigation: Training Embodied Agents to Find Object Instances
Jacob Krantz, Stefan Lee, Jitendra Malik, Dhruv Batra, Devendra Singh Chaplot