Feature Dispersion

Feature dispersion, the spread of feature representations in a model's output space, is an active area of research with implications across machine learning applications. Current work leverages feature dispersion to improve model calibration, particularly in large language models (LLMs) and vision-language models (VLMs) such as CLIP, and to strengthen robustness against adversarial attacks in recommender systems. Understanding and controlling feature dispersion is crucial for accurately quantifying model uncertainty and improving the reliability of predictions across these settings, ultimately leading to more trustworthy and robust AI systems.
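
To make "spread in output space" concrete, a minimal sketch of one common dispersion measure is given below: the mean pairwise cosine distance between embedding vectors. The choice of metric is an assumption for illustration only; the papers collected here may quantify dispersion differently (e.g., via feature variance or uniformity losses).

```python
import numpy as np

def feature_dispersion(features: np.ndarray) -> float:
    """Mean pairwise cosine distance between feature vectors.

    features: array of shape (n_samples, dim), e.g. embeddings from a
    model's output space. Higher values mean more spread-out
    (dispersed) representations.
    """
    # L2-normalize each vector so cosine similarity reduces to a dot product.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Pairwise cosine similarities, shape (n, n).
    sims = normed @ normed.T
    n = features.shape[0]
    # Average over off-diagonal entries (exclude self-similarity).
    mean_sim = (sims.sum() - np.trace(sims)) / (n * (n - 1))
    # Cosine distance = 1 - cosine similarity.
    return 1.0 - mean_sim

# Usage: well-spread embeddings score near 1.0, near-duplicate embeddings near 0.0.
rng = np.random.default_rng(0)
spread = rng.normal(size=(100, 64))                                  # dispersed features
clustered = rng.normal(size=(1, 64)) + 0.01 * rng.normal(size=(100, 64))  # collapsed features
print(feature_dispersion(spread))     # ~1.0
print(feature_dispersion(clustered))  # ~0.0
```

Under this (assumed) metric, low dispersion signals representation collapse, which is one reason dispersion is tied to calibration and uncertainty estimates: collapsed features leave the model little basis for distinguishing confident from uncertain inputs.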

Papers