Paper ID: 2210.15136

3D Shape Knowledge Graph for Cross-domain 3D Shape Retrieval

Rihao Chang, Yongtao Ma, Tong Hao, Weizhi Nie

The surge in 3D modeling has intensified research on 3D shape retrieval, and numerous approaches have been proposed for this challenging task. Nevertheless, cross-modal 3D shape retrieval remains difficult because of inherent disparities between modalities. This study introduces the notion of "geometric words", which serve as elementary building blocks whose combinations represent entities. To construct the knowledge graph, we use geometric words as nodes and connect them via shape categories and geometry attributes. We then devise a dedicated graph embedding method for knowledge acquisition and introduce an effective similarity measure for retrieval. Importantly, each 3D or 2D entity can anchor its geometric words within the knowledge graph, which thereby serves as a bridge between cross-domain data; as a result, our approach supports multiple cross-domain 3D shape retrieval tasks. We evaluate the proposed method on the ModelNet40 and ShapeNetCore55 datasets, covering both 3D shape retrieval and cross-domain retrieval scenarios, and use the established cross-modal dataset MI3DOR to assess cross-modal 3D shape retrieval. The experimental results and comparisons with state-of-the-art methods clearly demonstrate the superiority of our approach.

Submitted: Oct 27, 2022
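
To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch in Python of the overall idea: geometric words act as shared graph nodes linked to shape categories and geometry attributes, each 2D or 3D entity is anchored by the geometric words it contains, and retrieval ranks gallery shapes by similarity over pooled node embeddings. All node names, the random embeddings, and the mean-pooling cosine similarity are placeholders of my own; the paper's actual geometric-word extraction, graph embedding method, and similarity measure are not reproduced here.

```python
import numpy as np

# --- Toy knowledge graph: geometric-word nodes linked to categories/attributes ---
# Node and edge names are illustrative placeholders, not the paper's vocabulary.
geometric_words = ["gw_plane", "gw_cylinder", "gw_handle", "gw_leg"]
edges = [
    ("gw_plane", "category:table"),     # geometric word <-> shape category
    ("gw_leg", "category:table"),
    ("gw_cylinder", "category:mug"),
    ("gw_handle", "category:mug"),
    ("gw_plane", "attr:flat_surface"),  # geometric word <-> geometry attribute
    ("gw_cylinder", "attr:curved_surface"),
]

# Assign each node a random vector as a stand-in for a learned graph embedding
# (the paper learns these with its own graph embedding method).
rng = np.random.default_rng(0)
nodes = sorted({n for e in edges for n in e} | set(geometric_words))
embedding = {n: rng.normal(size=32) for n in nodes}

def entity_embedding(anchored_words):
    """Embed a 2D or 3D entity by pooling the geometric words it anchors."""
    vecs = np.stack([embedding[w] for w in anchored_words])
    return vecs.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A 2D query image and a gallery of 3D shapes, each described only by the
# geometric words detected in it -- the shared vocabulary bridges the domains.
query_image = ["gw_cylinder", "gw_handle"]
gallery_3d = {
    "shape_mug_01": ["gw_cylinder", "gw_handle"],
    "shape_table_07": ["gw_plane", "gw_leg"],
}

q = entity_embedding(query_image)
ranked = sorted(
    gallery_3d,
    key=lambda s: cosine(q, entity_embedding(gallery_3d[s])),
    reverse=True,
)
print(ranked)  # the mug, sharing the query's geometric words, ranks first
```

In this sketch the cross-domain link comes entirely from the shared geometric-word vocabulary: a 2D image and a 3D shape never need a common feature extractor, only a common set of anchored nodes in the graph.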