Meaningful Representation
Meaningful representation in machine learning aims to create data encodings that are both informative and computationally efficient, facilitating better model interpretability, controllability, and transferability across tasks and domains. Current research emphasizes disentangled representations, often learned with techniques such as variational autoencoders and contrastive learning, and focuses on developing robust metrics for evaluating representation quality, particularly for complex inputs such as multimodal healthcare data and 3D structures. These advances are crucial for improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
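
To make the disentanglement idea mentioned above concrete, here is a minimal, illustrative sketch of a beta-VAE objective in PyTorch, one common route to disentangled representations. The architecture, layer sizes, and the `beta` weight are arbitrary assumptions for illustration and are not taken from any of the listed papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10, hidden_dim=256):
        super().__init__()
        # Encoder maps inputs to the mean and log-variance of a diagonal Gaussian.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the input from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Reconstruction term plus a beta-weighted KL divergence to the unit
    # Gaussian prior; beta > 1 pressures latent dimensions toward independence,
    # which is the usual informal notion of disentanglement.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Usage on random data, purely to show the shapes involved.
model = BetaVAE()
x = torch.rand(32, 784)
x_hat, mu, logvar = model(x)
loss = beta_vae_loss(x, x_hat, mu, logvar)
loss.backward()
```

Contrastive approaches, also mentioned above, pursue the same goal differently: they shape the latent space by pulling related views of a sample together and pushing unrelated samples apart rather than by regularizing toward a factorized prior.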
Papers
Resolving Domain Shift For Representations Of Speech In Non-Invasive Brain Recordings
Jeremiah Ridge, Oiwi Parker Jones
Decoding Diffusion: A Scalable Framework for Unsupervised Analysis of Latent Space Biases and Representations Using Natural Language Prompts
E. Zhixuan Zeng, Yuhao Chen, Alexander Wong
Do Discrete Self-Supervised Representations of Speech Capture Tone Distinctions?
Opeyemi Osakuade, Simon King
Simultaneous Dimensionality Reduction for Extracting Useful Representations of Large Empirical Multimodal Datasets
Eslam Abdelaleem
Beyond position: how rotary embeddings shape representations and memory in autoregressive transformers
Valeria Ruscio, Fabrizio Silvestri
Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need
Jon Irureta, Jon Imaz, Aizea Lojo, Marco González, Iñigo Perona
Identifying Sub-networks in Neural Networks via Functionally Similar Representations
Tian Gao, Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Dennis Wei
ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos
Tao Tang, Hong Liu, Yingxuan You, Ti Wang, Wenhao Li
Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal, Phillip Howard
Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations
Kexuan Shi, Hai Chen, Leheng Zhang, Shuhang Gu