Meaningful Representation
Meaningful representation in machine learning aims to create data encodings that are both informative and computationally efficient, facilitating better model interpretability, controllability, and transferability across tasks and domains. Current research emphasizes disentangled representations, often obtained with techniques such as variational autoencoders and contrastive learning, and focuses on developing robust metrics for evaluating representation quality, particularly for complex data such as multimodal healthcare records and 3D structures. These advances are crucial for improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
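To make the contrastive-learning technique mentioned above concrete, below is a minimal, illustrative sketch of an InfoNCE-style objective in a simplified, one-directional form. It is not taken from any of the papers listed below; the function name `info_nce_loss` and all parameter choices are assumptions made for illustration only.

```python
# Illustrative sketch of a contrastive representation objective (InfoNCE).
# Two augmented "views" of the same input should map to nearby embeddings,
# while views of different inputs are pushed apart.
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same batch of inputs."""
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit hypersphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature     # (batch, batch) cosine-similarity matrix
    targets = torch.arange(z1.size(0))   # the positive pair for row i sits on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: random tensors stand in for an encoder's output on two views.
    z_a, z_b = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z_a, z_b).item())
```

The temperature controls how sharply mismatched pairs are penalized; real systems pair this loss with a learned encoder and data augmentations rather than random tensors.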
Papers
iQRL -- Implicitly Quantized Representations for Sample-efficient Reinforcement Learning
Aidan Scannell, Kalle Kujanpää, Yi Zhao, Mohammadreza Nakhaei, Arno Solin, Joni Pajarinen
Representations as Language: An Information-Theoretic Framework for Interpretability
Henry Conklin, Kenny Smith
Analyzing the Benefits of Prototypes for Semi-Supervised Category Learning
Liyi Zhang, Logan Nelson, Thomas L. Griffiths
Survival of the Fittest Representation: A Case Study with Modular Addition
Xiaoman Delores Ding, Zifan Carl Guo, Eric J. Michaud, Ziming Liu, Max Tegmark
How Does Perfect Fitting Affect Representation Learning? On the Training Dynamics of Representations in Deep Neural Networks
Yuval Sharon, Yehuda Dar
F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting
Xiangyu Sun, Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Usman Ali, Eunbyung Park