Visuomotor Control
Visuomotor control studies how visual information guides motor actions, with the goal of enabling robots and other systems to perform complex tasks from visual input. Current research focuses on improving the efficiency and robustness of visuomotor control with deep learning architectures such as transformers and diffusion models, often combined with self-supervised pre-training and with imitation learning methods such as behavior cloning, drawing on both expert demonstrations and large unlabeled datasets. These advances are crucial for building more adaptable and reliable robots that can operate in diverse and unpredictable environments, with significant implications for robotics, assistive technologies, and autonomous systems.
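At its simplest, behavior cloning for visuomotor control is supervised regression from image observations to expert actions. The sketch below is a minimal, hypothetical PyTorch example of that pattern; the CNN encoder, the 84x84 input resolution, and the 7-dimensional action space are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal behavior-cloning sketch for visuomotor control (assumed setup,
# not the method of any specific paper on this page).
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Maps an RGB observation to a continuous action via a small CNN encoder."""
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 84, 84)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))

def behavior_cloning_step(policy, optimizer, obs, expert_actions):
    """One supervised update: regress predicted actions onto expert actions."""
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Random stand-in data: a batch of 84x84 RGB frames and 7-DoF expert actions.
    policy = VisuomotorPolicy(action_dim=7)
    optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs = torch.rand(16, 3, 84, 84)
    expert_actions = torch.rand(16, 7)
    print(behavior_cloning_step(policy, optimizer, obs, expert_actions))
```

In practice, the CNN encoder is often replaced or initialized with a pre-trained visual backbone, which is exactly the design choice the pre-training and learning-from-scratch comparisons in the papers below examine.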
Papers
On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline
Nicklas Hansen, Zhecheng Yuan, Yanjie Ze, Tongzhou Mu, Aravind Rajeswaran, Hao Su, Huazhe Xu, Xiaolong Wang
MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations
Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran