Paper ID: 2410.12802
Resolving Positional Ambiguity in Dialogues by Vision-Language Models for Robot Navigation
Kuan-Lin Chen, Tzu-Ti Wei, Li-Tzu Yeh, Elaine Kao, Yu-Chee Tseng, Jen-Jee Chen
We consider an autonomous navigation robot that accepts human commands in natural language to provide services in an indoor environment. Such commands may include time, position, object, and action components. However, we observe that the positional components of these commands usually refer to objects in the environment and may carry different levels of positional ambiguity. For example, the command "Go to the chair!" is ambiguous when there are multiple chairs of the same type in a room. To disambiguate such commands, we employ a large language model and a large vision-language model to conduct multiple turns of conversation with the user. We propose a two-level approach that uses a vision-language model to map the natural language reference to a unique object ID in images, and then maps that object ID to a 3D depth map, allowing the robot to navigate from its current position to the target position. To the best of our knowledge, this is the first work linking foundation models to the positional ambiguity issue.
Submitted: Sep 30, 2024
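
The two-level mapping described in the abstract, first grounding the spoken reference to a unique object ID through clarification dialogue and then looking that ID up in a 3D depth map to obtain a navigation goal, can be illustrated with a minimal sketch. This is not the authors' implementation: resolve_object_id, object_id_to_goal, the word-overlap matching, and the toy detection/depth-map dictionaries are all hypothetical placeholders, and ask_user stands in for the multi-turn LLM/VLM conversation with the user.

```python
"""Minimal illustrative sketch of a two-level grounding pipeline.

All names are hypothetical placeholders, not the paper's code; `ask_user`
stands in for the multi-turn LLM/VLM clarification dialogue.
"""
import re
from dataclasses import dataclass
from typing import Callable


@dataclass
class NavigationGoal:
    object_id: int                  # unique object ID resolved in the image
    position_3d: tuple              # (x, y, z) looked up in the 3D depth map


def _words(text: str) -> set:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))


def _best_matches(query: str, candidates: dict) -> list:
    """Return the candidate IDs whose description shares the most words with the query."""
    query_words = _words(query)
    scores = {oid: len(query_words & _words(desc)) for oid, desc in candidates.items()}
    best = max(scores.values(), default=0)
    return [oid for oid, score in scores.items() if best > 0 and score == best]


def resolve_object_id(command: str, detections: dict,
                      ask_user: Callable[[str], str]) -> int:
    """Level 1: map the language command to a unique object ID.

    `detections` maps object IDs to descriptions of objects in view.  While
    more than one object matches, ask a clarification question (in the paper,
    a multi-turn conversation driven by an LLM and a VLM).
    """
    matches = _best_matches(command, detections)
    while len(matches) > 1:
        answer = ask_user(f"I see {len(matches)} candidates: "
                          f"{[detections[m] for m in matches]}. Which one do you mean?")
        matches = _best_matches(answer, {oid: detections[oid] for oid in matches})
    if not matches:
        raise ValueError("No object in view matches the command.")
    return matches[0]


def object_id_to_goal(object_id: int, depth_map: dict) -> NavigationGoal:
    """Level 2: map the unique object ID to a 3D position from the depth map."""
    return NavigationGoal(object_id=object_id, position_3d=depth_map[object_id])


if __name__ == "__main__":
    # Toy data standing in for VLM detections and the robot's depth map.
    detections = {3: "red chair near the window", 7: "gray office chair by the desk"}
    depth_map = {3: (2.1, 0.4, 0.0), 7: (5.3, -1.2, 0.0)}
    oid = resolve_object_id("Go to the chair!", detections,
                            ask_user=lambda q: "the red one near the window")
    print(object_id_to_goal(oid, depth_map))
```

In the paper, the first-level grounding is performed by a vision-language model over camera images rather than by word overlap; the sketch only fixes the interface between the two levels (ambiguous command in, unique object ID out, then a 3D goal for the navigation stack).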