Paper ID: 2407.14643
Double-Layer Soft Data Fusion for Indoor Robot WiFi-Visual Localization
Yuehua Ding, Jean-Francois Dollinger, Vincent Vauchey, Mourad Zghal
This paper presents a novel WiFi-Visual data fusion method for indoor robot (TIAGO++) localization. The method can use 10 WiFi samples and 4 low-resolution images ($58 \times 58$ pixels) to localize an indoor robot with an average localization error of about 1.32 meters. The test experiment was conducted 3 months after the data collection in a general teaching building, whose WiFi and visual environments had partially changed; this indirectly demonstrates the robustness of the proposed method. Rather than neural network design, this paper focuses on soft data fusion to prevent unbounded errors in visual localization. A double-layer soft data fusion is proposed, consisting of a first-layer WiFi-Visual feature fusion and a second-layer decision vector fusion. Firstly, motivated by the excellent capability of neural networks in image processing and recognition, temporal-spatial features are extracted from the WiFi data and represented in image form. Secondly, the WiFi temporal-spatial features in image form and the visual features from images taken by the robot camera are combined and jointly exploited by a classification neural network to produce a likelihood vector for WiFi-Visual localization. This is the first-layer WiFi-Visual fusion. Similarly, these two types of features can be exploited separately by neural networks to produce another two independent likelihood vectors. Thirdly, the three likelihood vectors are fused by Hadamard product and median filtering to produce the final likelihood vector for localization. This is the second-layer decision vector fusion. The proposed soft data fusion does not apply any threshold or prioritize any data source over the other in the fusion process. It never excludes positions with low probabilities, which avoids the information loss caused by a hard decision. A demo video is provided. The code will be open-sourced.
Submitted: Jul 19, 2024
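
The abstract describes the second-layer decision vector fusion as an element-wise (Hadamard) product of three likelihood vectors followed by median filtering, with no thresholding and no exclusion of low-probability positions. The sketch below is not the authors' released code; the function name, vector length, filter window, the choice to filter across position indices, and the final renormalization are illustrative assumptions.

```python
# Minimal sketch of the second-layer decision vector fusion, under assumptions
# stated in the lead-in (names, kernel size, and renormalization are hypothetical).
import numpy as np
from scipy.signal import medfilt

def fuse_decision_vectors(p_joint, p_wifi, p_visual, kernel_size=3):
    """Fuse three likelihood vectors of equal length into one decision vector."""
    p_joint, p_wifi, p_visual = (np.asarray(p, dtype=float)
                                 for p in (p_joint, p_wifi, p_visual))
    # Hadamard (element-wise) product: no threshold is applied, no source is
    # prioritized, and low-probability positions are never excluded.
    fused = p_joint * p_wifi * p_visual
    # Median filtering (here over neighboring position indices, an assumption)
    # suppresses isolated spikes in the fused vector.
    fused = medfilt(fused, kernel_size=kernel_size)
    # Renormalize so the result is again a likelihood vector (assumed step).
    total = fused.sum()
    return fused / total if total > 0 else np.full_like(fused, 1.0 / len(fused))

# Toy usage with 5 candidate positions and three softmax-style likelihood vectors:
p_joint  = np.array([0.05, 0.10, 0.60, 0.20, 0.05])   # joint WiFi-Visual network
p_wifi   = np.array([0.10, 0.15, 0.50, 0.15, 0.10])   # WiFi-only network
p_visual = np.array([0.05, 0.05, 0.55, 0.25, 0.10])   # visual-only network
final = fuse_decision_vectors(p_joint, p_wifi, p_visual)
estimated_position = int(np.argmax(final))            # index of the estimated position
```

Because the product keeps every entry of all three vectors, a position that any single source rates as unlikely is down-weighted rather than discarded, which is the stated motivation for avoiding hard decisions.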