Latent Structure
Latent structure research focuses on uncovering hidden patterns and relationships within complex data, with the aim of improving model interpretability, efficiency, and generalization. Current work emphasizes methods for learning and exploiting these latent structures through diverse approaches, including graph neural networks, diffusion models, and variational autoencoders, often applied to high-dimensional data in domains such as natural language processing, image generation, and biomedical analysis. This research has significant implications for advancing machine learning capabilities, enabling more robust and explainable AI systems, and deepening our understanding of complex phenomena across scientific disciplines.
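As a concrete illustration of one approach mentioned above, the sketch below shows a minimal variational autoencoder that compresses high-dimensional observations into a low-dimensional latent space; the KL term in the loss is what encourages that space to take on a smooth, structured geometry. The architecture, dimensions, and synthetic training batch are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch: learning latent structure with a variational autoencoder (VAE).
# All sizes and the random training batch are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior;
    # the KL term regularizes the latent space toward a structured geometry.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

if __name__ == "__main__":
    model = VAE()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)  # stand-in batch of high-dimensional observations
    for step in range(100):
        x_recon, mu, logvar = model(x)
        loss = vae_loss(x, x_recon, mu, logvar)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("final loss:", loss.item())
```

After training, the encoder's mean vectors can serve as compact latent representations for downstream analysis (clustering, visualization, or probing for structure), which is the general pattern the approaches above build on.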
Papers
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
Johannes Treutlein, Dami Choi, Jan Betley, Samuel Marks, Cem Anil, Roger Grosse, Owain Evans
Emerging-properties Mapping Using Spatial Embedding Statistics: EMUSES
Chris Foulon, Marcela Ovando-Tellez, Lia Talozzi, Maurizio Corbetta, Anna Matsulevits, Michel Thiebaut de Schotten