Meaningful Representation
Meaningful representation learning in machine learning aims to create data encodings that are both informative and computationally efficient, supporting better model interpretability, controllability, and transferability across tasks and domains. Current research emphasizes disentangled representations, often obtained with variational autoencoders or contrastive learning, and the development of robust metrics for evaluating representation quality, particularly for complex data such as multimodal healthcare records and 3D structures. These advances are crucial for improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
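To make the disentanglement idea concrete, the sketch below shows one common way it is encouraged in practice: a beta-VAE-style objective, where the KL term of a variational autoencoder is up-weighted (beta > 1) to pressure individual latent dimensions toward capturing separate factors of variation. This is a minimal illustrative example, not the method of any paper listed below; the module sizes, the `beta` value, and the use of PyTorch are all assumptions made for the sketch.

```python
# Minimal sketch of a beta-VAE objective (illustrative; dimensions and beta are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(256, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_logits, mu, logvar, beta=4.0):
    # Reconstruction term plus a beta-weighted KL divergence to N(0, I);
    # beta > 1 constrains latent capacity and encourages disentangled factors.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + beta * kl) / x.size(0)

# Illustrative usage on random inputs standing in for flattened images in [0, 1].
model = TinyVAE()
x = torch.rand(32, 784)
x_logits, mu, logvar = model(x)
loss = beta_vae_loss(x, x_logits, mu, logvar)
loss.backward()
```

Contrastive approaches reach a similar goal by a different route, pulling representations of related views together and pushing unrelated ones apart; the choice between the two typically depends on whether explicit generative reconstruction of the data is needed.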
Papers
DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations
Jinlu Wang, Jipeng Guo, Yanfeng Sun, Junbin Gao, Shaofan Wang, Yachao Yang, Baocai Yin
Intriguing Equivalence Structures of the Embedding Space of Vision Transformers
Shaeke Salman, Md Montasir Bin Shams, Xiuwen Liu
Explaining the Implicit Neural Canvas: Connecting Pixels to Neurons by Tracing their Contributions
Namitha Padmanabhan, Matthew Gwilliam, Pulkit Kumar, Shishira R Maiya, Max Ehrlich, Abhinav Shrivastava
Explicitly Disentangled Representations in Object-Centric Learning
Riccardo Majellaro, Jonathan Collu, Aske Plaat, Thomas M. Moerland