Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with techniques such as contrastive learning, transformers, and mixture-of-experts models, while addressing challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
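To make the contrastive learning technique mentioned above concrete, here is a minimal, illustrative sketch of an InfoNCE-style contrastive objective: an encoder maps two augmented views of the same input to nearby representations while pushing apart representations of other inputs in the batch. The encoder architecture, augmentation, and hyperparameters are assumptions for illustration only, not taken from any of the papers listed below.

```python
# Minimal sketch of contrastive representation learning (InfoNCE-style loss).
# All architecture choices and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps raw inputs to unit-norm, low-dimensional representations."""
    def __init__(self, in_dim: int, rep_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce_loss(z1, z2, temperature: float = 0.1):
    """Matched pairs (z1[i], z2[i]) are positives; all other
    pairings in the batch serve as negatives."""
    logits = z1 @ z2.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage: encode two noisy "views" of the same batch and minimize the loss,
# pulling matching representations together and pushing others apart.
encoder = Encoder(in_dim=32)
x = torch.randn(16, 32)
view1 = x + 0.05 * torch.randn_like(x)
view2 = x + 0.05 * torch.randn_like(x)
loss = info_nce_loss(encoder(view1), encoder(view2))
loss.backward()
```

The learned representations (the encoder outputs) can then be reused for downstream tasks such as classification or prediction, which is the general pattern the papers below build on in their respective domains.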
Papers
Efficient visual object representation using a biologically plausible spike-latency code and winner-take-all inhibition
Melani Sanchez-Garcia, Tushar Chauhan, Benoit R. Cottereau, Michael Beyeler
Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks
Bowen Jin, Yu Zhang, Qi Zhu, Jiawei Han
Learning latent representations for operational nitrogen response rate prediction
Christos Pylianidis, Ioannis N. Athanasiadis
Practical Skills Demand Forecasting via Representation Learning of Temporal Dynamics
Maysa M. Garcia de Macedo, Wyatt Clarke, Eli Lucherini, Tyler Baldwin, Dilermando Queiroz Neto, Rogerio de Paula, Subhro Das
Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations
Negin Heravi, Ayzaan Wahid, Corey Lynch, Pete Florence, Travis Armstrong, Jonathan Tompson, Pierre Sermanet, Jeannette Bohg, Debidatta Dwibedi
Unsupervised Driving Behavior Analysis using Representation Learning and Exploiting Group-based Training
Soma Bandyopadhyay, Anish Datta, Shruti Sachan, Arpan Pal
Representation Learning for Context-Dependent Decision-Making
Yuzhen Qin, Tommaso Menara, Samet Oymak, ShiNung Ching, Fabio Pasqualetti