Latent Geometry
Latent geometry research focuses on uncovering and exploiting the geometric structure underlying data representations in machine learning models. Current efforts concentrate on learning and utilizing non-Euclidean geometries, such as hyperbolic and Riemannian spaces, within the latent spaces of models including diffusion models, GANs, and transformers, often via techniques like manifold learning, contrastive learning, and normalizing flows. By aligning latent-space geometry with the intrinsic structure of the data, this work aims to improve model performance, interpretability, and generalization, with applications in fields such as computer vision, natural language processing, and molecular design; the ultimate goal is more efficient and effective models that exploit the inherent geometric properties of the data.
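To make the idea of a non-Euclidean latent geometry concrete, the following minimal sketch (our own illustration, not code from the listed papers) computes geodesic distances in the Poincaré ball model of hyperbolic space. Distances grow without bound as points approach the unit-ball boundary, which is why hyperbolic latent spaces can embed tree-like, hierarchical data with low distortion.

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points in the Poincare ball model
    of hyperbolic space. Points must have Euclidean norm < 1."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / denom)

origin = np.zeros(2)
near_boundary = np.array([0.9, 0.0])
very_near_boundary = np.array([0.99, 0.0])

# Euclidean distance from the origin to these points is 0.9 and 0.99,
# but the hyperbolic distance diverges near the boundary:
print(poincare_distance(origin, near_boundary))        # about 2.94
print(poincare_distance(origin, very_near_boundary))   # about 5.29
```

For a point at radius r on a ray from the origin, this reduces to the closed form 2 artanh(r), which makes the boundary blow-up explicit.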
Papers
Neural Latent Geometry Search: Product Manifold Inference via Gromov-Hausdorff-Informed Bayesian Optimization
Haitz Saez de Ocariz Borde, Alvaro Arroyo, Ismael Morales, Ingmar Posner, Xiaowen Dong
Gromov-Hausdorff Distances for Comparing Product Manifolds of Model Spaces
Haitz Saez de Ocariz Borde, Alvaro Arroyo, Ismael Morales, Ingmar Posner, Xiaowen Dong