Unseen Test Distribution
Unseen test distribution research aims to improve machine learning model performance when the data encountered at test time differs significantly from the training data. Current efforts concentrate on algorithms and models that generalize robustly to these unseen distributions, using techniques such as mixture-of-experts architectures, quantile risk minimization, and test-time adaptation methods that leverage self-supervision or prototype alignment. This research is crucial for building reliable and trustworthy AI systems in real-world settings where data variability is inevitable, with impact on fields ranging from anomaly detection to image synthesis and federated learning.
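To make the test-time adaptation idea concrete, here is a minimal sketch in the spirit of entropy-minimization approaches (e.g., TENT-style adaptation): a frozen linear classifier sees an unlabeled, shifted test batch, and only a small set of per-feature scale parameters is updated to reduce prediction entropy. The classifier, data, and the choice to adapt a feature-scale vector are all illustrative assumptions, not a specific method from the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Hypothetical pre-trained linear classifier (kept frozen during adaptation).
W = rng.normal(size=(3, 5))            # 3 classes, 5 input features
b = rng.normal(size=3)
X = rng.normal(loc=1.5, size=(32, 5))  # unlabeled test batch under a mean shift

gamma = np.ones(5)  # per-feature scale: the only parameter adapted at test time

def predict(gamma):
    return softmax((X * gamma) @ W.T + b)

before = mean_entropy(predict(gamma))

# Gradient descent on the batch's mean prediction entropy.
lr = 0.05
for _ in range(20):
    p = predict(gamma)
    H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)
    # dH/dz_k = -p_k * (log p_k + H) for softmax logits z
    dH_dz = -p * (np.log(p + 1e-12) + H)
    # Chain rule through z = (x * gamma) @ W.T + b: dz_k/dgamma_j = W[k, j] * x_j
    grad = ((dH_dz @ W) * X).mean(axis=0)
    gamma -= lr * grad

after = mean_entropy(predict(gamma))
print(before, after)  # entropy drops as the model grows more confident on the batch
```

Lowering entropy alone can collapse predictions onto a single class, which is why published methods typically restrict which parameters are adapted (here, only `gamma`) and adapt over batches rather than single examples.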