Robust Representation
Robust representation learning aims to create feature representations that are both informative and resilient to noise, variations, and adversarial attacks, improving the generalization and performance of machine learning models across diverse downstream tasks. Current research focuses on enhancing contrastive learning methods, often incorporating techniques like counterfactual image synthesis, adversarial training, and careful selection of data augmentations to improve robustness. These advancements are being applied across various modalities, including images, text, and time series data, leveraging architectures such as transformers, graph neural networks, and variational autoencoders. The resulting robust representations are crucial for improving the reliability and performance of AI systems in real-world applications, particularly in scenarios with limited labeled data or noisy inputs.
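The contrastive objective at the core of the methods above can be sketched as a minimal InfoNCE loss: two augmented views of the same image form a positive pair, and all other images in the batch act as negatives. This is an illustrative NumPy sketch, not the implementation from any paper listed below; the toy embeddings and parameter names are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE contrastive loss between two augmented views.

    z1, z2: (n, d) embeddings of the same n inputs under two different
    augmentations; positive pairs are (z1[i], z2[i]), and the other
    n - 1 rows serve as in-batch negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (n, n) similarity matrix
    # Row-wise softmax cross-entropy with the diagonal as the target,
    # computed in a numerically stable way.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy check (illustrative data): the loss should be lower when the two
# views of each input agree than when they are unrelated.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))  # consistent views
shuffled = rng.normal(size=(8, 16))                 # unrelated views
loss_aligned = info_nce_loss(anchor, aligned)
loss_random = info_nce_loss(anchor, shuffled)
```

Lowering the loss pulls each positive pair together and pushes the remaining batch entries apart, which is the mechanism the robustness-oriented augmentation and adversarial-training variants described above build on.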
Papers
Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis
Neel Dey, Benjamin Billot, Hallee E. Wong, Clinton J. Wang, Mengwei Ren, P. Ellen Grant, Adrian V. Dalca, Polina Golland
Learning predictable and robust neural representations by straightening image sequences
Xueyan Niu, Cristina Savin, Eero P. Simoncelli