Representation Space
In machine learning, a representation space is the multi-dimensional vector space in which data is encoded, with the goal of capturing meaningful relationships and supporting downstream tasks. Current research focuses on the geometric properties of these spaces, including how data points are organized (e.g., clustering, orthogonality), the impact of regularization techniques, and methods for manipulating or aligning representations across different modalities or models (e.g., contrastive learning, normalizing flows, orthogonal transformations). This work is crucial for improving model interpretability, robustness, and generalization, with applications ranging from image classification and retrieval to knowledge graph embedding and recommendation systems.
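The core idea that semantic relationships become geometric ones can be illustrated with a minimal sketch: below, a few hand-made toy vectors (not outputs of any real model) stand in for learned embeddings, and cosine similarity measures closeness in the space.

```python
import numpy as np

# Toy "representation space": each item is encoded as a vector (an embedding).
# These vectors are illustrative placeholders, not produced by a trained model.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9]),
}

def cosine_similarity(a, b):
    # Cosine similarity measures angular closeness in the representation space:
    # 1.0 means identical direction, 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_cat_dog = cosine_similarity(embeddings["cat"], embeddings["dog"])
sim_cat_car = cosine_similarity(embeddings["cat"], embeddings["car"])

# In a well-structured space, semantically related items ("cat", "dog")
# lie closer together than unrelated ones ("cat", "car").
print(sim_cat_dog > sim_cat_car)  # True
```

Contrastive learning methods mentioned above train encoders so that exactly this property holds: related pairs are pulled together and unrelated pairs pushed apart in the representation space.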
Papers
Non-Linear Inference Time Intervention: Improving LLM Truthfulness
Jakub Hoscilowicz, Adam Wiacek, Jan Chojnacki, Adam Cieslak, Leszek Michon, Vitalii Urbanevych, Artur Janicki
OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning
Noor Ahmed, Anna Kukleva, Bernt Schiele