Representation Learning
Representation learning aims to produce compact, meaningful encodings of data that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on learning robust, generalizable representations, often using contrastive learning, transformers, and mixture-of-experts models, while addressing challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across applications ranging from recommendation systems to medical image analysis and causal inference. A brief illustrative sketch of the contrastive learning idea follows below.
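As a concrete illustration of the contrastive learning mentioned above, the sketch below shows a minimal InfoNCE-style loss in PyTorch: embeddings of two augmented views of the same batch are pulled together while all other pairs act as negatives. The function name, temperature value, and toy data are illustrative assumptions and are not drawn from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: [batch, dim] embeddings of two augmented views of the same
    examples. Row i of z1 and row i of z2 form a positive pair; every
    other row serves as a negative.
    """
    z1 = F.normalize(z1, dim=1)                # unit-normalize so dot products are cosine similarities
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature           # [batch, batch] similarity matrix, temperature-scaled
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for an encoder's outputs on two views.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```

In practice the two views would come from an encoder applied to different augmentations of the same inputs, and the loss would be symmetrized over both views; the sketch keeps only the core objective.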
Papers
Towards Ontology-Enhanced Representation Learning for Large Language Models
Francesco Ronzano, Jay Nanavati
FCOM: A Federated Collaborative Online Monitoring Framework via Representation Learning
Tanapol Kosolwattana, Huazheng Wang, Raed Al Kontar, Ying Lin
Relation Modeling and Distillation for Learning with Noisy Labels
Xiaming Che, Junlin Zhang, Zhuang Qi, Xin Qi
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models
Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping
Spectral regularization for adversarially-robust representation learning
Sheng Yang, Jacob A. Zavatone-Veth, Cengiz Pehlevan
Diffusion Bridge AutoEncoders for Unsupervised Representation Learning
Yeongmin Kim, Kwanghyeon Lee, Minsang Park, Byeonghu Na, Il-Chul Moon
Extreme Compression of Adaptive Neural Images
Leo Hoshikawa, Marcos V. Conde, Takeshi Ohashi, Atsushi Irie