Meaningful Representation
Research on meaningful representations in machine learning aims to create data encodings that are both informative and computationally efficient, facilitating model interpretability, controllability, and transferability across tasks and domains. Current work emphasizes disentangled representations, often achieved with techniques such as variational autoencoders and contrastive learning, and focuses on developing robust metrics for evaluating representation quality, particularly for complex data such as multimodal healthcare records and 3D structures. These advances are crucial for improving the performance and reliability of AI systems across diverse applications, from medical diagnosis to materials science and natural language processing.
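To make the disentanglement objective concrete, the sketch below implements a minimal β-VAE-style loss, one common variational-autoencoder technique of the kind mentioned above: the encoder outputs a diagonal Gaussian over latent factors, and a β-weighted KL term pressures those factors toward independence. All module names, dimensions, and hyperparameters here are illustrative assumptions, not drawn from any of the papers listed below.

```python
# A minimal beta-VAE sketch (illustrative only; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.logvar_head = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # keeps the sampling step differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_recon, mu, logvar, beta: float = 4.0):
    # Reconstruction term plus a beta-weighted KL divergence to N(0, I);
    # beta > 1 encourages statistically independent latent factors.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Usage sketch on random data:
model = TinyVAE()
x = torch.rand(32, 784)
x_recon, mu, logvar = model(x)
loss = beta_vae_loss(x, x_recon, mu, logvar)
loss.backward()
```

Setting beta = 1 recovers the standard VAE objective; larger values trade reconstruction fidelity for more independent, and hence more interpretable, latent factors.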
Papers
DiMBERT: Learning Vision-Language Grounded Representations with Disentangled Multimodal-Attention
Fenglin Liu, Xian Wu, Shen Ge, Xuancheng Ren, Wei Fan, Xu Sun, Yuexian Zou
When does mixup promote local linearity in learned representations?
Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
Fashion-Specific Attributes Interpretation via Dual Gaussian Visual-Semantic Embedding
Ryotaro Shimizu, Masanari Kimura, Masayuki Goto
Low-Rank Representations Towards Classification Problem of Complex Networks
Murat Çelik, Ali Baran Taşdemir, Lale Özkahya
Solving Reasoning Tasks with a Slot Transformer
Ryan Faulkner, Daniel Zoran
On Representations of Mean-Field Variational Inference
Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki, Edith Zhang