Meaningful Representation
Research on meaningful representation in machine learning seeks data encodings that are both informative and computationally efficient, supporting model interpretability, controllability, and transferability across tasks and domains. Current work emphasizes disentangled representations, often learned with variational autoencoders or contrastive objectives, along with robust metrics for evaluating representation quality, particularly for complex inputs such as multimodal healthcare records and 3D structures. These advances matter for the performance and reliability of AI systems in applications ranging from medical diagnosis to materials science and natural language processing.
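To make the two techniques named above concrete, below is a minimal PyTorch sketch of the objectives they typically rest on: a beta-weighted VAE loss, whose KL term pushes the latent posterior toward a factorized prior (a common route to disentanglement), and an InfoNCE contrastive loss over two augmented views of the same batch. The function names and hyperparameter values (beta, temperature) are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Illustrative sketches only; hyperparameters (beta, temperature) are assumed.
import torch
import torch.nn.functional as F


def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """ELBO-style objective: reconstruction + beta * KL(q(z|x) || N(0, I)).
    Weighting the KL term with beta > 1 encourages more factorized,
    disentangled latent dimensions."""
    # Reconstruction term (Bernoulli likelihood; assumes inputs scaled to [0, 1]).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # Closed-form KL between a diagonal Gaussian posterior and the N(0, I) prior.
    kl = -0.5 * torch.mean(
        torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1)
    )
    return recon + beta * kl


def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive InfoNCE objective over two views of a batch: z1[i] and
    z2[i] form the positive pair; all other rows serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # scaled pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

Both losses expect batched tensors of shape (batch, dim); in practice beta and the temperature are tuned per dataset.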
Papers
Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks
Luca Scimeca, Alexander Rubinstein, Armand Mihai Nicolicioiu, Damien Teney, Yoshua Bengio
A simple connection from loss flatness to compressed representations in neural networks
Shirui Chen, Stefano Recanatesi, Eric Shea-Brown
An Investigation of Representation and Allocation Harms in Contrastive Learning
Subha Maity, Mayank Agarwal, Mikhail Yurochkin, Yuekai Sun
Algebras of actions in an agent's representations of the world
Alexander Dean, Eduardo Alonso, Esther Mondragon
From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
Irene Cannistraci, Luca Moschella, Marco Fumero, Valentino Maiorca, Emanuele Rodolà