Latent Geometry
Latent geometry research focuses on uncovering and exploiting the geometric structure underlying data representations in machine learning models. Current efforts center on learning and using non-Euclidean geometries, such as hyperbolic and Riemannian manifolds, within the latent spaces of diffusion models, GANs, and transformers, often via manifold learning, contrastive learning, and normalizing flows. Aligning latent-space geometry with the intrinsic structure of the data aims to improve model performance, interpretability, and generalization, with applications in computer vision, natural language processing, and molecular design. Hierarchical data such as taxonomies, for example, embeds with lower distortion in hyperbolic space, whose volume grows exponentially with radius, than in Euclidean space. The ultimate goal is more efficient and effective models that exploit the inherent geometric properties of the data.
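As a concrete illustration of a non-Euclidean latent geometry, consider the Poincaré ball model of hyperbolic space, a common choice for hierarchical embeddings. The sketch below (a minimal NumPy example, not tied to any particular paper or library) computes the closed-form hyperbolic distance between latent points and shows how points near the ball's boundary are much farther apart hyperbolically than their Euclidean gap suggests.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between points in the Poincare ball (||x|| < 1):

    d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    """
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    # eps guards against division by zero for points on the boundary
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

a = np.array([0.0, 0.0])   # origin of the ball
b = np.array([0.9, 0.0])   # near the boundary
c = np.array([0.95, 0.0])  # even nearer the boundary

print(poincare_distance(a, b))  # ~2.944
print(poincare_distance(b, c))  # ~0.719, far exceeding the 0.05 Euclidean gap
```

Distances blow up toward the boundary, which is what lets tree-like hierarchies embed with low distortion: leaves sit near the boundary with ample room, while the root sits near the origin.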