Representation Learning
Representation learning aims to produce meaningful, efficient representations of data that capture its underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on learning robust, generalizable representations, often via contrastive learning, transformers, and mixture-of-experts models, while tackling challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
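To make the contrastive-learning technique mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss of the kind used in SimCLR-like self-supervised training. The function name, temperature value, and the random tensors standing in for encoder outputs are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch of an InfoNCE contrastive loss (SimCLR-style).
# Assumes z1[i] and z2[i] are embeddings of two augmented views of
# the same sample; all other batch pairings act as negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    z1 = F.normalize(z1, dim=1)          # project onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors standing in for encoder outputs.
batch, dim = 32, 128
z1, z2 = torch.randn(batch, dim), torch.randn(batch, dim)
print(f"InfoNCE loss: {info_nce_loss(z1, z2).item():.3f}")
```

The temperature scales how sharply the loss concentrates on hard negatives; values around 0.05 to 0.5 are common in practice.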
Papers
Towards General-Purpose Representation Learning of Polygonal Geometries
Gengchen Mai, Chiyu Jiang, Weiwei Sun, Rui Zhu, Yao Xuan, Ling Cai, Krzysztof Janowicz, Stefano Ermon, Ni Lao
Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights
Konstantin Schürholt, Boris Knyazev, Xavier Giró-i-Nieto, Damian Borth
Unraveling Key Elements Underlying Molecular Property Prediction: A Systematic Study
Jianyuan Deng, Zhibo Yang, Hehe Wang, Iwao Ojima, Dimitris Samaras, Fusheng Wang
Neural-FacTOR: Neural Representation Learning for Website Fingerprinting Attack over TOR Anonymity
Haili Sun, Yan Huang, Lansheng Han, Xiang Long, Hongle Liu, Chunjie Zhou