Generalized Representers
Generalized representers aim to build robust, explainable machine learning models by attributing a model's predictions to the influence of individual training samples. Current research explores methods for constructing such representers, including novel architectures like ONE-PEACE for multi-modal learning and contrastive learning techniques augmented with information-bottleneck principles to improve generalization. This work is significant because it addresses the need for more interpretable and generalizable AI models, improving accuracy and robustness across diverse datasets and modalities in applications ranging from image and text classification to face anti-spoofing.
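The core idea — decomposing a prediction into per-training-sample contributions — can be sketched with the classical representer theorem. The following is a minimal illustration, not any paper's method: for kernel ridge regression, the prediction on a test point is exactly a weighted sum of kernel similarities to the training samples, so each summand serves as that sample's "representer value". All data and variable names here are hypothetical.

```python
import numpy as np

# Hypothetical toy data: 20 training samples in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=20)

lam = 1e-2                # ridge regularization strength (assumed)
K = X @ X.T               # linear kernel Gram matrix
# Dual coefficients: alpha = (K + lam*I)^{-1} y
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

x_test = rng.normal(size=5)
# Per-sample representer values: alpha_i * <x_i, x_test>
contributions = alpha * (X @ x_test)
prediction = contributions.sum()   # f(x) = sum_i alpha_i k(x_i, x)

# Sanity check: matches the primal prediction with w = X^T alpha.
w = X.T @ alpha
assert np.isclose(prediction, w @ x_test)
```

Here `contributions[i]` is training sample `i`'s additive influence on this particular prediction; samples with large positive or negative values are the ones the model "points to" when explaining its output.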
Papers
October 9, 2024
October 27, 2023
May 18, 2023
November 15, 2022