Meaningful Representation
Research on meaningful representation in machine learning aims to create data encodings that are both informative and computationally efficient, supporting better model interpretability, controllability, and transferability across tasks and domains. Current work emphasizes disentangled representations, often learned with variational autoencoders or contrastive objectives, along with robust metrics for evaluating representation quality, particularly for complex data such as multimodal healthcare records and 3D structures. These advances are central to improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
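To make the disentanglement idea above concrete, the following is a minimal sketch of a beta-VAE-style objective, one common way to encourage disentangled latent factors. It is illustrative only: the architecture, layer sizes, and the `beta` weight are assumptions for this sketch and are not drawn from any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BetaVAE(nn.Module):
    """Toy variational autoencoder; beta > 1 in the loss pressures
    the latent code toward more disentangled factors."""

    def __init__(self, input_dim=784, latent_dim=10, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar


def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL divergence to a unit-Gaussian prior.
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl


if __name__ == "__main__":
    model = BetaVAE()
    x = torch.rand(32, 784)  # toy batch of flattened inputs
    x_recon, mu, logvar = model(x)
    print(beta_vae_loss(x, x_recon, mu, logvar).item())
```

Contrastive approaches pursue the same goal differently, pulling together encodings of augmented views of the same sample while pushing apart encodings of different samples; the papers below apply these and related ideas to domains ranging from spatial transcriptomics to point cloud quality assessment.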
Papers
Efficient Compression of Sparse Accelerator Data Using Implicit Neural Representations and Importance Sampling
Xihaier Luo, Samuel Lurvey, Yi Huang, Yihui Ren, Jin Huang, Byung-Jun Yoon
SUICA: Learning Super-high Dimensional Sparse Implicit Neural Representations for Spatial Transcriptomics
Qingtian Zhu, Yumin Zheng, Yuling Sang, Yifan Zhan, Ziyan Zhu, Jun Ding, Yinqiang Zheng
Towards Lensless Image Deblurring with Prior-Embedded Implicit Neural Representations in the Low-Data Regime
Abeer Banerjee, Sanjay Singh
Fusion of Discrete Representations and Self-Augmented Representations for Multilingual Automatic Speech Recognition
Shih-heng Wang, Jiatong Shi, Chien-yu Huang, Shinji Watanabe, Hung-yi Lee
Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization
Ziyu Shan, Yujie Zhang, Yipeng Liu, Yiling Xu
Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning
Linyang He, Ercong Nie, Helmut Schmid, Hinrich Schütze, Nima Mesgarani, Jonathan Brennan