Meaningful Representation
Research on meaningful representation in machine learning aims to create data encodings that are both informative and computationally efficient, supporting better model interpretability, controllability, and transferability across tasks and domains. Current work emphasizes disentangled representations, often obtained with variational autoencoders or contrastive learning, and the development of robust metrics for evaluating representation quality, particularly for complex inputs such as multimodal healthcare records and 3D structures. These advances are central to improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
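Since the paragraph above cites variational autoencoders as a common route to disentangled representations, a minimal sketch of one standard recipe, the beta-VAE objective (a KL-weighted VAE loss), follows. This is an illustrative PyTorch example, not code from any of the listed papers; the layer sizes, the `beta` value, and all class and function names are assumptions made for the sketch.

```python
# Minimal beta-VAE sketch (illustrative; not from the papers listed below).
# A KL weight beta > 1 pressures the latent dimensions toward independence,
# one common heuristic for encouraging disentangled representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients to mu/logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL divergence to N(0, I).
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

# Usage: one training step on a batch of flattened inputs.
model = VAE()
x = torch.rand(32, 784)
recon, mu, logvar = model(x)
loss = beta_vae_loss(x, recon, mu, logvar)
loss.backward()
```

Contrastive learning, the other technique named above, instead learns representations by pulling augmented views of the same input together and pushing different inputs apart (e.g., an InfoNCE-style loss); the same evaluation questions about disentanglement and transfer apply to both families.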
Papers
BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations
Weixi Feng, Chao Liu, Sifei Liu, William Yang Wang, Arash Vahdat, Weili Nie
Investigating Map-Based Path Loss Models: A Study of Feature Representations in Convolutional Neural Networks
Ryan G. Dempsey, Jonathan Ethier, Halim Yanikomeroglu
ICLR: In-Context Learning of Representations
Core Francisco Park, Andrew Lee, Ekdeep Singh Lubana, Yongyi Yang, Maya Okawa, Kento Nishi, Martin Wattenberg, Hidenori Tanaka
Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation
Qucheng Peng, Ce Zheng, Zhengming Ding, Pu Wang, Chen Chen
Compressed Chain of Thought: Efficient Reasoning Through Dense Representations
Jeffrey Cheng, Benjamin Van Durme
Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency
Yuhong Chen, Ailin Song, Huifeng Yin, Shuai Zhong, Fuhai Chen, Qi Xu, Shiping Wang, Mingkun Xu