Visual Grounding
Visual grounding is the task of localizing the region of an image or 3D scene that a natural language description refers to. Current research focuses on improving the accuracy and efficiency of grounding models, often employing transformer-based architectures and leveraging multimodal large language models (MLLMs) for richer feature fusion and reasoning. The task is central to embodied AI, since it lets robots and other agents connect language to their surroundings, and it has significant implications for applications such as robotic manipulation, visual question answering, and medical image analysis.
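To make the transformer-based fusion concrete, below is a minimal sketch of one common design pattern: pre-extracted image patch features attend to embedded text tokens via cross-attention, and a small head regresses the referred bounding box. All module names, dimensions, and the pooling strategy are illustrative assumptions, not the method of any specific paper.

```python
# Hypothetical sketch of cross-attention visual grounding (PyTorch).
# Assumes patch features and token embeddings are already extracted
# by backbone encoders (e.g. a ViT and a text encoder).
import torch
import torch.nn as nn

class GroundingHead(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Cross-attention: image patches (queries) attend to the
        # language description (keys/values).
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Regress a normalized box (cx, cy, w, h) from pooled features.
        self.box_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4), nn.Sigmoid()
        )

    def forward(self, patch_feats: torch.Tensor, text_feats: torch.Tensor):
        # patch_feats: (B, N_patches, dim); text_feats: (B, N_tokens, dim)
        fused, _ = self.cross_attn(patch_feats, text_feats, text_feats)
        fused = self.norm(fused + patch_feats)  # residual fusion
        pooled = fused.mean(dim=1)              # pool over patches
        return self.box_head(pooled)            # (B, 4) box in [0, 1]

if __name__ == "__main__":
    head = GroundingHead()
    patches = torch.randn(2, 196, 256)  # e.g. a 14x14 ViT patch grid
    tokens = torch.randn(2, 12, 256)    # embedded description tokens
    print(head(patches, tokens).shape)  # torch.Size([2, 4])
```

Real systems differ widely in how they fuse modalities (single-stream encoders, query-based decoders, or MLLM prompting), but most reduce to some variant of this attention-based language-to-region alignment.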