Embedding Space
Embedding spaces represent data points as vectors in a lower-dimensional space, aiming to capture the essential relationships between points and thereby support machine learning tasks such as classification and data analysis. Current research focuses on effective methods for constructing these spaces, including optimizing loss functions and constraints, leveraging relative representations and relational graph structures, and improving interpretability through conceptualization. These advances matter for the robustness and explainability of machine learning models, with applications in recommendation systems, outlier detection, and natural language processing.
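As an illustrative sketch of the core idea, the snippet below projects points into a lower-dimensional embedding space using PCA (a deliberately simple choice; the methods surveyed above learn embeddings with task-specific losses) and checks that nearby points in the original space remain close, under cosine similarity, in the embedding:

```python
import numpy as np

def embed(X, dim):
    """Project points into a `dim`-dimensional embedding space via PCA.
    A minimal illustrative construction, not a learned embedding."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions;
    # keeping the top `dim` rows of Vt yields the projection.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: two well-separated clusters in 5-D, embedded into 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 5)),
               rng.normal(3.0, 0.1, (5, 5))])
Z = embed(X, dim=2)

# Points from the same cluster stay close in the embedding space,
# while points from different clusters are pushed apart.
same = cosine_similarity(Z[0], Z[1])
diff = cosine_similarity(Z[0], Z[5])
```

Here `same` is near 1 and `diff` near -1, reflecting that the embedding preserves the cluster structure even after reducing from five dimensions to two.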