Vision-Based Autonomous Systems
Vision-based autonomous systems aim to enable robots and vehicles to perceive and navigate their environments using only visual input. Current research focuses on improving the robustness and efficiency of perception algorithms: developing novel keypoint detection methods, employing deep learning models for tasks such as pose estimation and control (e.g., behavior cloning and end-to-end learning), and leveraging event-based cameras for high-speed maneuvers. These advancements are crucial for applications ranging from autonomous driving and drone racing to interplanetary navigation and robotic manipulation of GUIs, promising significant improvements in safety, efficiency, and capability across domains.
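As a rough illustration of the behavior-cloning idea mentioned above — learning a control policy by regressing from camera frames to expert actions — the sketch below fits a linear map from flattened synthetic "frames" to steering commands with gradient descent. All names, sizes, and data here are illustrative assumptions, not from any specific paper; real systems use deep networks and recorded demonstrations.

```python
import numpy as np

# Hypothetical behavior-cloning sketch (illustrative assumptions throughout):
# regress from flattened camera frames to an expert steering command.
rng = np.random.default_rng(0)

n_frames, n_pixels = 200, 16          # 200 tiny 4x4 "grayscale frames"
frames = rng.normal(size=(n_frames, n_pixels))
expert_w = rng.normal(size=n_pixels)  # stand-in for the expert's policy
steer = frames @ expert_w             # expert steering labels

# Gradient descent on mean-squared error between predicted and expert steering.
w = np.zeros(n_pixels)
lr = 0.05
for _ in range(500):
    residual = frames @ w - steer
    grad = frames.T @ residual / n_frames  # gradient of 0.5 * mean(residual^2)
    w -= lr * grad

mse = float(np.mean((frames @ w - steer) ** 2))
print(f"training MSE: {mse:.6f}")
```

In practice the linear map would be replaced by a convolutional network and the synthetic labels by logged human demonstrations, but the objective — minimize the gap between the policy's output and the expert's action on the same frame — is the same.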