Meaningful Representation
Meaningful representation in machine learning aims to create data encodings that are both informative and computationally efficient, facilitating better model interpretability, controllability, and transferability across tasks and domains. Current research emphasizes disentangled representations, often achieved using techniques like variational autoencoders and contrastive learning, and focuses on developing robust metrics to evaluate the quality of these representations, particularly in complex data like multimodal healthcare data and 3D structures. These advancements are crucial for improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
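One common route to the representations described above is contrastive learning, where an encoder is trained so that two views of the same datum land close together in embedding space while unrelated data stay apart. Below is a minimal NumPy sketch of an InfoNCE-style contrastive loss; the function name, toy data, and temperature value are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor's matching
    positive is the in-batch 'correct class' among all positives."""
    # Normalize embeddings to unit length so similarities are cosines
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Similarity matrix: row i compares anchor i with every positive
    logits = a @ p.T / temperature
    # Matching pairs sit on the diagonal
    labels = np.arange(len(a))
    # Row-wise softmax cross-entropy (log-sum-exp for stability)
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
views = z + 0.01 * rng.normal(size=z.shape)      # lightly perturbed "views"
loss_aligned = info_nce_loss(z, views)           # matched pairs: low loss
loss_random = info_nce_loss(z, rng.normal(size=z.shape))  # unmatched: high loss
```

With aligned pairs the diagonal similarities dominate, so the loss is near zero, whereas independent random pairs give a loss near log(batch size); variational autoencoders pursue the same goal through a reconstruction-plus-regularization objective instead.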
Papers
Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization
Ziyu Shan, Yujie Zhang, Yipeng Liu, Yiling Xu
Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning
Linyang He, Ercong Nie, Helmut Schmid, Hinrich Schütze, Nima Mesgarani, Jonathan Brennan
Resolving Domain Shift For Representations Of Speech In Non-Invasive Brain Recordings
Jeremiah Ridge, Oiwi Parker Jones
Decoding Diffusion: A Scalable Framework for Unsupervised Analysis of Latent Space Biases and Representations Using Natural Language Prompts
E. Zhixuan Zeng, Yuhao Chen, Alexander Wong
Do Discrete Self-Supervised Representations of Speech Capture Tone Distinctions?
Opeyemi Osakuade, Simon King
Simultaneous Dimensionality Reduction for Extracting Useful Representations of Large Empirical Multimodal Datasets
Eslam Abdelaleem
Beyond position: how rotary embeddings shape representations and memory in autoregressive transformers
Valeria Ruscio, Fabrizio Silvestri
Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need
Jon Irureta, Jon Imaz, Aizea Lojo, Marco González, Iñigo Perona