Visual Policy
Visual policy research focuses on enabling robots to learn and execute tasks from visual input, with the twin aims of bridging the gap between simulated and real-world environments and improving robustness to visual variation. Current work leverages techniques such as transformers, point cloud processing, and knowledge distillation to learn policies from diverse data sources, including human demonstrations and multi-camera views. These advances are key to building more adaptable and reliable robots capable of complex manipulation in unstructured environments, with impact on robotics, automation, and human-robot collaboration.
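To make the core idea concrete, the following is a minimal sketch of a visual policy trained by behavior cloning on expert demonstrations. Everything here is an illustrative assumption rather than any specific paper's method: the "policy" is a single linear map from a flattened image observation to a continuous action, and the "expert" is a synthetic linear teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, assumed dimensions (not from any cited work):
IMG_SHAPE = (8, 8, 3)   # toy RGB image observation
ACTION_DIM = 2          # e.g. a 2-DoF end-effector velocity command

def init_policy():
    # Linear policy: action = W @ flatten(image) + b
    d = int(np.prod(IMG_SHAPE))
    return {"W": np.zeros((ACTION_DIM, d)), "b": np.zeros(ACTION_DIM)}

def act(policy, image):
    # Map one image observation to an action vector.
    x = image.reshape(-1)
    return policy["W"] @ x + policy["b"]

def bc_step(policy, images, expert_actions, lr=1e-2):
    """One gradient step on the mean-squared behavior-cloning loss."""
    X = images.reshape(len(images), -1)           # (N, d)
    pred = X @ policy["W"].T + policy["b"]        # (N, ACTION_DIM)
    err = pred - expert_actions
    policy["W"] -= lr * (err.T @ X) / len(X)
    policy["b"] -= lr * err.mean(axis=0)
    return float((err ** 2).mean())

# Synthetic demonstrations: expert actions are a fixed linear
# function of the image, standing in for human demonstration data.
W_true = rng.normal(size=(ACTION_DIM, int(np.prod(IMG_SHAPE))))
images = rng.normal(size=(64, *IMG_SHAPE))
actions = images.reshape(64, -1) @ W_true.T

policy = init_policy()
losses = [bc_step(policy, images, actions) for _ in range(200)]
```

In practice the linear map would be replaced by a visual encoder (e.g. a CNN or transformer over image patches or point clouds), but the training loop, regressing predicted actions onto demonstrated actions, has the same shape.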