Visuomotor Policy
Visuomotor policy learning aims to enable robots to translate visual input directly into actions for complex manipulation tasks. Current research focuses on improving the robustness and generalization of these policies across diverse environments and robot embodiments, often employing diffusion models, transformers, and trajectory optimization to handle high-dimensional observations and multimodal action distributions. Advances here are crucial for robotics: they enable more adaptable and efficient robots capable of performing a wider range of tasks in unstructured settings, with significant implications for manufacturing, healthcare, and other domains.
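To make the diffusion-based approach mentioned above concrete, here is a minimal sketch of how a diffusion-style action head might sample an action conditioned on visual features. Everything here is illustrative: the linear `predict_noise` function is a toy stand-in for a trained noise-prediction network, the dimensions and the simplified update rule (no proper noise schedule) are assumptions, not any specific paper's method.

```python
import numpy as np

# Toy sketch of reverse-diffusion action sampling for a visuomotor policy.
# Assumption: a random linear map stands in for a learned denoiser; real
# systems condition a U-Net or transformer on camera features.

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, STEPS = 16, 7, 10   # e.g. features + a 7-DoF arm action
W = rng.normal(scale=0.1, size=(OBS_DIM + ACT_DIM + 1, ACT_DIM))

def predict_noise(obs, noisy_action, t):
    """Toy stand-in for a learned noise predictor eps_theta(obs, a_t, t)."""
    x = np.concatenate([obs, noisy_action, [t / STEPS]])
    return x @ W

def sample_action(obs, steps=STEPS):
    """Start from Gaussian noise and iteratively denoise into an action."""
    a = rng.normal(size=ACT_DIM)          # a_T ~ N(0, I)
    for t in range(steps, 0, -1):
        eps = predict_noise(obs, a, t)
        a = a - eps / steps               # simplified update, no noise schedule
    return a

obs = rng.normal(size=OBS_DIM)            # stand-in for encoded camera input
action = sample_action(obs)
print(action.shape)                        # (7,)
```

Because sampling starts from noise, repeated calls can land in different modes of the action distribution, which is one reason diffusion heads handle multimodal demonstrations better than a single regression output.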