Visual Navigation Task
Visual navigation research focuses on enabling agents, such as robots, to navigate environments using visual input, aiming to achieve efficient and robust goal-directed movement. Current research emphasizes developing models that handle diverse goal specifications (e.g., images, language, coordinates), addressing challenges like partial observability, and incorporating prosocial behaviors for safe human-robot interaction. These advancements leverage deep reinforcement learning, often incorporating attention mechanisms and novel reward shaping techniques, and are driving progress in robotics, autonomous driving, and embodied AI.
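To make the reward-shaping idea concrete, here is a minimal illustrative sketch (not any specific paper's method) of potential-based reward shaping for a goal-directed navigation agent: the sparse goal reward is densified with a term derived from distance-to-goal, a construction known to preserve the optimal policy. All names and constants below are hypothetical.

```python
import math

GAMMA = 0.99  # discount factor (hypothetical value)

def potential(pos, goal):
    # Negative Euclidean distance: potential rises as the agent nears the goal.
    return -math.dist(pos, goal)

def shaped_reward(base_reward, pos, next_pos, goal, gamma=GAMMA):
    # Potential-based shaping: F = gamma * Phi(s') - Phi(s).
    # Adding F to the environment reward leaves the optimal policy unchanged
    # while giving dense feedback on progress toward the goal.
    return base_reward + gamma * potential(next_pos, goal) - potential(pos, goal)

# Example: agent at (0, 0), goal at (3, 0).
goal = (3.0, 0.0)
r_toward = shaped_reward(0.0, (0.0, 0.0), (1.0, 0.0), goal)   # step toward goal
r_away = shaped_reward(0.0, (0.0, 0.0), (-1.0, 0.0), goal)    # step away from goal
```

A step toward the goal yields a positive shaped reward and a step away yields a negative one, which is the dense signal deep RL navigation policies typically train on.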