Paper ID: 2109.04569

Open-World Distributed Robot Self-Localization with Transferable Visual Vocabulary and Both Absolute and Relative Features

Mitsuki Yoshida, Ryogo Yamamoto, Daiki Iwata, Kanji Tanaka

Visual robot self-localization is a fundamental problem in visual robot navigation and has been studied across various problem settings, including monocular and sequential localization. However, many existing studies focus primarily on single-robot scenarios, with limited exploration of more general settings involving diverse robots connected through wireless networks with constrained communication capacities, such as open-world distributed robot systems. In particular, issues related to the transfer and sharing of key knowledge, such as visual descriptions and visual vocabulary, between robots have been largely neglected. This work introduces a new self-localization framework designed for open-world distributed robot systems that maintains state-of-the-art performance while offering two key advantages: (1) it employs an unsupervised visual vocabulary model that maps view images to multimodal, lightweight, and transferable visual features, and (2) the visual vocabulary itself is a lightweight and communication-friendly model. Although the primary focus is on encoding monocular view images, the framework can be easily extended to sequential localization applications. By utilizing complementary similarity-preserving features, both absolute and relative, the framework meets the requirements for being unsupervised, multimodal, lightweight, and transferable. All features are learned and recognized using a lightweight graph neural network and a scene graph. The effectiveness of the proposed method is validated in both passive and active self-localization scenarios.
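
The abstract describes a lightweight graph neural network operating on a scene graph whose nodes carry absolute features and whose edges carry relative features, with the resulting descriptor matched against a transferable visual vocabulary. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation; all layer sizes, array shapes, and names (e.g. LightweightGNN, encode) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): a scene graph with
# per-landmark "absolute" node features and pairwise "relative" edge features,
# aggregated by one round of lightweight message passing into a compact place
# descriptor, then matched against a shared vocabulary by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class LightweightGNN:
    """One message-passing round followed by mean pooling (illustrative)."""
    def __init__(self, node_dim, edge_dim, hidden_dim, out_dim):
        self.W_node = rng.normal(scale=0.1, size=(node_dim, hidden_dim))
        self.W_edge = rng.normal(scale=0.1, size=(edge_dim, hidden_dim))
        self.W_out = rng.normal(scale=0.1, size=(hidden_dim, out_dim))

    def encode(self, node_feats, edges, edge_feats):
        # node_feats: (N, node_dim) absolute per-landmark features
        # edges:      list of (i, j) node-index pairs in the scene graph
        # edge_feats: (E, edge_dim) relative (pairwise) features
        h = relu(node_feats @ self.W_node)
        msgs = np.zeros_like(h)
        for k, (i, j) in enumerate(edges):
            m = relu(edge_feats[k] @ self.W_edge)
            msgs[i] += m
            msgs[j] += m
        h = h + msgs                 # combine absolute and relative cues
        g = h.mean(axis=0)           # pool nodes into one place descriptor
        g = g @ self.W_out
        return g / (np.linalg.norm(g) + 1e-9)

# Toy usage: a scene graph with 4 landmarks and 3 relations.
gnn = LightweightGNN(node_dim=16, edge_dim=4, hidden_dim=32, out_dim=8)
nodes = rng.normal(size=(4, 16))
edges = [(0, 1), (1, 2), (2, 3)]
edge_f = rng.normal(size=(3, 4))
query = gnn.encode(nodes, edges, edge_f)

# "Vocabulary" of previously mapped place descriptors; localization here is
# plain nearest-neighbour search over cosine similarity.
vocabulary = rng.normal(size=(10, 8))
vocabulary /= np.linalg.norm(vocabulary, axis=1, keepdims=True)
best_place = int(np.argmax(vocabulary @ query))
print("matched place id:", best_place)
```

Because the descriptor is a short fixed-length vector and the model is a handful of small weight matrices, both would be cheap to transmit between robots over a bandwidth-constrained link, which is the communication-friendliness property the abstract emphasizes.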

Submitted: Sep 9, 2021