Feature Dispersion
Feature dispersion, the degree to which a model's feature representations spread out in its embedding or output space, is an active research topic with implications across machine learning. Current work leverages feature dispersion to improve model calibration, particularly in large language models (LLMs) and vision-language models (VLMs) such as CLIP, and to strengthen recommender systems against adversarial attacks. Measuring and controlling dispersion matters because the spread of representations acts as a signal of model uncertainty: understanding it helps quantify confidence more accurately and makes predictions more reliable and trustworthy in these settings.
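As a concrete illustration of how dispersion can serve as an uncertainty signal, the sketch below measures it as the mean pairwise cosine distance among embeddings of a single input under several augmentations. This particular metric, and the function name feature_dispersion, are illustrative assumptions rather than a method taken from any specific paper collected here.

```python
import numpy as np

def feature_dispersion(features: np.ndarray) -> float:
    """Mean pairwise cosine distance among feature vectors.

    features: (n, d) array, e.g. embeddings of n augmented views of one input.
    Returns a scalar in [0, 2]; larger values mean the representations are
    more spread out in the embedding space.
    """
    # L2-normalize so that dot products are cosine similarities.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T                        # (n, n) cosine similarities
    n = sims.shape[0]
    off_diag = sims[~np.eye(n, dtype=bool)]     # drop self-similarities
    return float(np.mean(1.0 - off_diag))       # cosine distance = 1 - similarity

# Example: tightly clustered embeddings vs. widely spread ones.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(1, 512))
tight = anchor + 0.01 * rng.normal(size=(8, 512))   # low dispersion
spread = rng.normal(size=(8, 512))                  # high dispersion
print(f"tight cluster : {feature_dispersion(tight):.4f}")
print(f"random spread : {feature_dispersion(spread):.4f}")
```

Low dispersion across augmentations suggests the model maps the input to a stable region of feature space (higher confidence), while high dispersion suggests the opposite; this is the basic intuition behind using dispersion for calibration and uncertainty quantification.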