Representation Space
In machine learning, a representation space is the multi-dimensional vector space in which data is encoded, with the aim of capturing meaningful relationships between inputs and supporting downstream tasks. Current research focuses on the geometric properties of these spaces, including how data points are organized (e.g., clustering, orthogonality), the effects of regularization, and methods for manipulating or aligning representations across different modalities or models (e.g., contrastive learning, normalizing flows, orthogonal transformations). This line of work is central to improving model interpretability, robustness, and generalization, with applications ranging from image classification and retrieval to knowledge graph embedding and recommendation systems.
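As a concrete illustration of aligning representation spaces via an orthogonal transformation, the sketch below solves the classical orthogonal Procrustes problem with NumPy: given embeddings of the same inputs from two hypothetical models, it finds the rotation that best maps one space onto the other. The function name and synthetic data are illustrative assumptions, not drawn from any of the papers listed here.

```python
import numpy as np

def orthogonal_align(X, Y):
    """Return the orthogonal matrix R minimizing ||X @ R - Y||_F
    (closed-form orthogonal Procrustes solution via SVD)."""
    # SVD of the cross-covariance between the two embedding sets.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: embed 100 points in an 8-dim space, apply a hidden
# rotation to simulate a second model's representation space.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
R_true, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # hidden orthogonal map
Y = X @ R_true

R = orthogonal_align(X, Y)
print(np.allclose(X @ R, Y, atol=1e-6))  # alignment recovers the rotation
```

Because the solution is closed-form, this kind of alignment is often used as a cheap baseline before learned alignment methods such as contrastive objectives or normalizing flows.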
Papers
The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities
Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, Yoon Kim
Normalized Space Alignment: A Versatile Metric for Representation Analysis
Danish Ebadulla, Aditya Gulati, Ambuj Singh
Bridging the Gap: Representation Spaces in Neuro-Symbolic AI
Xin Zhang, Victor S. Sheng
SOE: SO(3)-Equivariant 3D MRI Encoding
Shizhe He, Magdalini Paschali, Jiahong Ouyang, Adnan Masood, Akshay Chaudhari, Ehsan Adeli
Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples
Thomas T. Zhang, Bruce D. Lee, Ingvar Ziemann, George J. Pappas, Nikolai Matni
Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing
Wenhao Liu, Siyu An, Junru Lu, Muling Wu, Tianlong Li, Xiaohua Wang, Xiaoqing Zheng, Di Yin, Xing Sun, Xuanjing Huang
Spacewalker: Traversing Representation Spaces for Fast Interactive Exploration and Annotation of Unstructured Data
Lukas Heine, Fabian Hörst, Jana Fragemann, Gijs Luijten, Miriam Balzer, Jan Egger, Fin Bahnsen, M. Saquib Sarfraz, Jens Kleesiek, Constantin Seibold