Robust Representation
Robust representation learning aims to create feature representations that are both informative and resilient to noise, variations, and adversarial attacks, improving the generalization and performance of machine learning models across diverse downstream tasks. Current research focuses on enhancing contrastive learning methods, often incorporating techniques such as counterfactual image synthesis, adversarial training, and careful selection of data augmentations. These advancements are being applied across various modalities, including images, text, and time-series data, leveraging architectures such as transformers, graph neural networks, and variational autoencoders. The resulting robust representations are crucial for improving the reliability and performance of AI systems in real-world applications, particularly in scenarios with limited labeled data or noisy inputs.
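To make the contrastive-learning framing above concrete, below is a minimal sketch (assuming PyTorch) of an augmentation-based contrastive objective in the NT-Xent / SimCLR style: two stochastic augmentations of the same input are encoded, and their representations are pulled together while being pushed apart from all other pairs in the batch. The encoder, augmentation pipeline, and temperature are illustrative placeholders, not details taken from the papers listed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views z1, z2 of the same batch, each (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # mask each sample's similarity to itself
    # The positive for row i is its other augmented view: i + N for the first half, i - N for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage with a placeholder encoder and augmentation function:
# z1 = encoder(augment(x))   # first augmented view
# z2 = encoder(augment(x))   # second augmented view
# loss = nt_xent_loss(z1, z2)
```

The choice of augmentations here is exactly the "careful selection" the overview refers to: which invariances the augmentations encode largely determines which nuisances the learned representation becomes robust to.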
Papers
ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning
Xiangyu Yin, Sihao Wu, Jiaxu Liu, Meng Fang, Xingyu Zhao, Xiaowei Huang, Wenjie Ruan
Predictive variational autoencoder for learning robust representations of time-series data
Julia Huiming Wang, Dexter Tsin, Tatiana Engel