Distribution Performance
Research on distribution performance in machine learning focuses on making models robust to data that differs from the training distribution (out-of-distribution, or OOD, data). Current work emphasizes evaluating and predicting OOD performance, and explores techniques such as ensemble methods, dropout regularization, and spectral adaptation of model weights, often in the setting of pre-trained models fine-tuned on smaller datasets. This line of research is important for building trustworthy and reliable AI systems, particularly in high-stakes applications where unexpected data is common, and for advancing our understanding of model generalization beyond the idealized independent and identically distributed (IID) assumption.
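To make the ensemble and dropout ideas concrete, below is a minimal sketch of Monte Carlo dropout used as a cheap ensemble for OOD-aware prediction: dropout is kept active at test time, predictions are averaged over several stochastic forward passes, and predictive entropy serves as a simple uncertainty score that tends to rise on shifted inputs. The model, synthetic data, and hyperparameters are illustrative assumptions, not taken from any specific paper, and the untrained toy network is only meant to show the mechanics.

```python
# Minimal sketch: Monte Carlo dropout as a cheap ensemble for OOD-aware prediction.
# All model and data choices below are illustrative, not tied to any specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, in_dim=20, hidden=64, n_classes=3, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Average softmax outputs over stochastic forward passes with dropout kept on."""
    model.train()  # keeps dropout active; the model has no BatchNorm, so this is safe
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    # Predictive entropy as a simple uncertainty score: typically higher on shifted inputs.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    torch.manual_seed(0)
    model = MLP()
    x_id = torch.randn(8, 20)            # stand-in for in-distribution inputs
    x_ood = torch.randn(8, 20) * 3 + 5   # crudely shifted inputs as an OOD proxy
    _, ent_id = mc_dropout_predict(model, x_id)
    _, ent_ood = mc_dropout_predict(model, x_ood)
    print(f"mean predictive entropy  ID: {ent_id.mean():.3f}  OOD: {ent_ood.mean():.3f}")
```

A deep ensemble follows the same recipe, except the averaging runs over independently trained models rather than over dropout masks of a single model.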