Semantic Space
A semantic space organizes meaning as points in a multi-dimensional vector space, so that geometric relationships between vectors capture relationships between concepts and support tasks such as information retrieval, language understanding, and reasoning. Current research focuses on robust methods for constructing and manipulating these spaces, including neural-symbolic models, variational autoencoders, and contrastive learning, often combined with techniques such as disentanglement and hubness reduction to improve representation quality. This work has significant implications for artificial intelligence, particularly natural language processing, computer vision, and robotics, by enabling more nuanced and effective processing of complex information.
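To make the vector-space idea concrete, the sketch below builds a toy semantic space in which each concept is a vector and relatedness is measured by cosine similarity, the basic operation behind retrieval over such spaces. The concept names and 4-dimensional embeddings are illustrative assumptions for this sketch only; they are not drawn from any of the papers listed below.

```python
# Minimal sketch of a semantic space: concepts as vectors, relatedness
# as cosine similarity. Embeddings here are hand-picked toy values.
import numpy as np

# Hypothetical concept embeddings (assumed for illustration).
concepts = {
    "dog":   np.array([0.9, 0.1, 0.3, 0.0]),
    "cat":   np.array([0.8, 0.2, 0.4, 0.1]),
    "car":   np.array([0.1, 0.9, 0.0, 0.5]),
    "truck": np.array([0.2, 0.8, 0.1, 0.6]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query: str, k: int = 2) -> list[tuple[str, float]]:
    """Rank all other concepts by similarity to the query concept."""
    q = concepts[query]
    scores = [
        (name, cosine_similarity(q, vec))
        for name, vec in concepts.items()
        if name != query
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]

if __name__ == "__main__":
    # Related concepts ("dog", "cat") score higher than unrelated ones
    # ("dog", "car") -- the property that retrieval and reasoning over
    # semantic spaces relies on.
    print(nearest_neighbors("dog"))
```

In practice the embeddings come from learned models (e.g. contrastive or autoencoder-based encoders) rather than being hand-specified, and issues such as hubness arise precisely because nearest-neighbor retrieval like the above behaves unevenly in high-dimensional spaces.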
Papers
A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties
Junfei Xiao, Ziqi Zhou, Wenxuan Li, Shiyi Lan, Jieru Mei, Zhiding Yu, Alan Yuille, Yuyin Zhou, Cihang Xie
Compositional Zero-Shot Learning for Attribute-Based Object Reference in Human-Robot Interaction
Peng Gao, Ahmed Jaafar, Brian Reily, Christopher Reardon, Hao Zhang