Multimodal Reinforcement Learning
Multimodal reinforcement learning (MRL) aims to train agents that effectively use diverse data sources (e.g., visual, auditory, tactile, textual) to make optimal decisions in complex environments. Current research focuses on improving data efficiency and robustness through techniques such as self-supervised representation learning, multimodal alignment, and novel policy architectures, for example Gaussian mixture policy heads that can represent multimodal or discontinuous optimal action distributions. These advances are driving progress in applications including robotic control (locomotion, manipulation, surgery), human-robot interaction, and autonomous driving, by enabling agents to learn more effectively from richer, real-world data.
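To illustrate why a Gaussian mixture policy helps with discontinuous optima, here is a minimal NumPy sketch (not taken from any specific paper above; the function name and parameters are hypothetical). A single Gaussian policy forced to cover two distinct good actions would average them into a bad one; a mixture can commit to either mode.

```python
import numpy as np

def gmm_policy_sample(weights, means, stds, rng):
    """Sample an action from a Gaussian mixture policy head.

    weights: (K,) mixture probabilities; means/stds: (K, action_dim).
    In practice these would be predicted per-state by a network;
    here they are fixed for illustration.
    """
    k = rng.choice(len(weights), p=weights)  # pick a mixture component
    return rng.normal(means[k], stds[k])     # sample within that component

rng = np.random.default_rng(0)
# Hypothetical 2-component head for a 1-D action with two optima at -1 and +1
# (e.g. "steer left or right around an obstacle" -- never straight ahead).
weights = np.array([0.5, 0.5])
means = np.array([[-1.0], [1.0]])
stds = np.array([[0.1], [0.1]])

actions = np.array([gmm_policy_sample(weights, means, stds, rng)
                    for _ in range(1000)])
# Samples cluster around the two modes instead of collapsing to their mean 0.
```

A unimodal Gaussian fit to the same return landscape would center near 0, the worst action; this is the failure mode that mixture policy heads are meant to avoid.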